Warning: There is no cluster found.
[ROHALOG:16908786] Automatic Release of Resource: Start
cl_get_path[249]: '.' is in the current path. This use is accepted, even though the directory cannot be checked at run time.
cl_get_path[249]: '.' is in the current path. This use is accepted, even though the directory cannot be checked at run time.
cl_get_path[249]: '.' is in the current path. This use is accepted, even though the directory cannot be checked at run time.
:get_local_nodename[48] version=1.2.1.28
:get_local_nodename[52] : cllsclstr -N will return the local node if not configured in HACMPcluster
:get_local_nodename[54] ODMDIR=/etc/es/objrepos
:get_local_nodename[54] export ODMDIR
:get_local_nodename[55] nodename=''
:get_local_nodename[55] typeset nodename
:get_local_nodename[56] cllsclstr -N
Warning: There is no cluster found.
:get_local_nodename[56] nodename=''
:get_local_nodename[57] rc=255
:get_local_nodename[57] typeset -i rc
:get_local_nodename[58] (( 255 != 0 ))
:get_local_nodename[58] exit 255
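The exit 255 above is expected at this point: no cluster is defined yet, and get_local_nodename simply wraps cllsclstr -N and propagates its failure. A minimal ksh sketch of the traced logic, using only commands and paths that appear in the trace (a paraphrase, not the shipped script):

    # Sketch of get_local_nodename as traced above.
    export ODMDIR=/etc/es/objrepos      # PowerHA object repository
    typeset nodename
    nodename=$(cllsclstr -N)            # prints 'Warning: There is no cluster found.' when unconfigured
    typeset -i rc=$?
    (( rc != 0 )) && exit $rc           # 255 here, since HACMPcluster is still empty
    print -- "$nodename"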
cl_get_path[249]: '.' is in the current path. This use is accepted, even though the directory cannot be checked at run time.
:clmanageroha[318] : version='@(#)' 5881272 43haes/usr/sbin/cluster/events/clmanageroha.sh, 61aha_r726, 2205A_aha726, May 16 2022 12:15 PM
:clmanageroha[321] clodmget -n -f connection_type HACMPhmcparam
:clmanageroha[321] CONN_TYPE=0
:clmanageroha[321] typeset -i CONN_TYPE
:clmanageroha[323] clodmget -q name=' and object like POWERVS_*' -nf name HACMPnode
:clmanageroha[323] 2> /dev/null
:clmanageroha[323] [[ -n '' ]]
:clmanageroha[326] export CONN_TYPE
:clmanageroha[331] roha_session_open -o release -s -t
:clmanageroha[roha_session_open:131] roha_session.id=18547048
:clmanageroha[roha_session_open:132] date
:clmanageroha[roha_session_open:132] LC_ALL=C
:clmanageroha[roha_session_open:132] roha_session_log 'Open session 18547048 at Sat Jan 28 16:40:08 KORST 2023'
[ROHALOG:18547048:(0.094)] Open session 18547048 at Sat Jan 28 16:40:08 KORST 2023
:clmanageroha[roha_session_open:137] getopts :cso:l:t opt
:clmanageroha[roha_session_open:146] roha_session.operation=release
:clmanageroha[roha_session_open:137] getopts :cso:l:t opt
:clmanageroha[roha_session_open:143] roha_session.systemmirror_mode=1
:clmanageroha[roha_session_open:137] getopts :cso:l:t opt
:clmanageroha[roha_session_open:152] online_rgs_skip=1
:clmanageroha[roha_session_open:137] getopts :cso:l:t opt
:clmanageroha[roha_session_open:163] [[ release != @(acquire|release|adjust) ]]
:clmanageroha[roha_session_open:168] no_roha_apps=0
:clmanageroha[roha_session_open:168] typeset -i no_roha_apps
:clmanageroha[roha_session_open:169] need_explicit_res_rel=0
:clmanageroha[roha_session_open:169] typeset -i need_explicit_res_rel
:clmanageroha[roha_session_open:187] [[ -n '' ]]
:clmanageroha[roha_session_open:188] [[ -z '' ]]
:clmanageroha[roha_session_open:188] clmgr q roha
ERROR: no cluster is defined.
:clmanageroha[roha_session_open:188] [[ -z '' ]]
:clmanageroha[roha_session_open:189] roha_session_log 'INFO: No ROHA configured on applications.\n'
[ROHALOG:18547048:(0.578)] INFO: No ROHA configured on applications.
[ROHALOG:18547048:(0.578)]
:clmanageroha[roha_session_open:190] no_roha_apps=1
:clmanageroha[roha_session_open:195] read_tunables
:clmanageroha[roha_session_open:196] echo ''
:clmanageroha[roha_session_open:196] grep -q
Usage: grep [-r] [-R] [-H] [-L] [-E|-F] [-c|-l|-q] [-insvxbhwyu] [-p[parasep]] -e pattern_list...
[-f pattern_file...] [file...]
Usage: grep [-r] [-R] [-H] [-L] [-E|-F] [-c|-l|-q] [-insvxbhwyu] [-p[parasep]] [-e pattern_list...]
-f pattern_file... [file...]
Usage: grep [-r] [-R] [-H] [-L] [-E|-F] [-c|-l|-q] [-insvxbhwyu] [-p[parasep]] pattern_list [file...]
:clmanageroha[roha_session_open:197] (( 2 == 0 ))
:clmanageroha[roha_session_open:202] (( 1 == 1 ))
:clmanageroha[roha_session_open:203] roha_session_read_odm_dynresop DLPAR_MEM
:clmanageroha[roha_session_read_odm_dynresop:816] clodmget -q key=DLPAR_MEM -nf value HACMPdynresop
:clmanageroha[roha_session_read_odm_dynresop:816] ODMDIR=/etc/es/objrepos
:clmanageroha[roha_session_read_odm_dynresop:816] out=''
:clmanageroha[roha_session_read_odm_dynresop:817] print -- 0
:clmanageroha[roha_session_open:203] (( 0 == 0.00 ))
:clmanageroha[roha_session_open:204] roha_session_read_odm_dynresop DLPAR_PROCS
:clmanageroha[roha_session_read_odm_dynresop:816] clodmget -q key=DLPAR_PROCS -nf value HACMPdynresop
:clmanageroha[roha_session_read_odm_dynresop:816] ODMDIR=/etc/es/objrepos
:clmanageroha[roha_session_read_odm_dynresop:816] out=''
:clmanageroha[roha_session_read_odm_dynresop:817] print -- 0
:clmanageroha[roha_session_open:204] (( 0 == 0 ))
:clmanageroha[roha_session_open:205] roha_session_read_odm_dynresop DLPAR_PROC_UNITS
:clmanageroha[roha_session_read_odm_dynresop:816] clodmget -q key=DLPAR_PROC_UNITS -nf value HACMPdynresop
:clmanageroha[roha_session_read_odm_dynresop:816] ODMDIR=/etc/es/objrepos
:clmanageroha[roha_session_read_odm_dynresop:816] out=''
:clmanageroha[roha_session_read_odm_dynresop:817] print -- 0
:clmanageroha[roha_session_open:205] (( 0 == 0.00 ))
:clmanageroha[roha_session_open:206] roha_session_log 'INFO: Nothing to be done.\n'
[ROHALOG:18547048:(0.635)] INFO: Nothing to be done.
[ROHALOG:18547048:(0.635)]
:clmanageroha[roha_session_open:207] exit 0
[ROHALOG:16908786] Automatic Release of Resource: End
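This "Automatic Release of Resource" pass is a no-op because nothing was ever acquired: clmgr q roha returns nothing (no ROHA-configured applications) and the three HACMPdynresop counters are all zero. A hedged ksh paraphrase of the traced short-circuit in roha_session_open (helper names are taken from the trace; this is not the shipped clmanageroha.sh):

    typeset -i no_roha_apps=0
    [[ -z $(clmgr q roha 2>/dev/null) ]] && no_roha_apps=1    # 'ERROR: no cluster is defined.' lands here

    mem=$(clodmget -q key=DLPAR_MEM -nf value HACMPdynresop)
    procs=$(clodmget -q key=DLPAR_PROCS -nf value HACMPdynresop)
    units=$(clodmget -q key=DLPAR_PROC_UNITS -nf value HACMPdynresop)
    if (( no_roha_apps == 1 )) && (( ${mem:-0} == 0 && ${procs:-0} == 0 && ${units:-0} == 0 ))
    then
        # Empty ODM values default to 0, matching the 'print -- 0' lines in the trace
        roha_session_log 'INFO: Nothing to be done.\n'
        exit 0
    fi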
rc.init: Removed /usr/es/sbin/cluster/.cthags.exit file.
Warning: There is no cluster found.
Jan 28 2023 17:10:20 EVENT START: admin_op clrm_start_request 28696 0
|2023-01-28T17:10:20|28696|EVENT START: admin_op clrm_start_request 28696 0|
:admin_op[110] trap sigint_handler INT
:admin_op[116] OP_TYPE=clrm_start_request
:admin_op[116] typeset OP_TYPE
:admin_op[117] SERIAL=28696
:admin_op[117] typeset -li SERIAL
:admin_op[118] INVALID=0
:admin_op[118] typeset -li INVALID
The administrator initiated the following action at Sat Jan 28 17:10:20 KORST 2023
Check smit.log and clutils.log for additional details.
Starting PowerHA cluster services on node: epprda in normal mode...
Jan 28 2023 17:10:23 EVENT COMPLETED: admin_op clrm_start_request 28696 0 0
|2023-01-28T17:10:23|28696|EVENT COMPLETED: admin_op clrm_start_request 28696 0 0|
PowerHA SystemMirror Event Preamble
----------------------------------------------------------------------------
Serial number for this event: 28697
Cluster services started on node 'epprda'
Enqueued rg_move acquire event for resource group epprd_rg.
Node Up Completion Event has been enqueued.
----------------------------------------------------------------------------
|EVENT_PREAMBLE_START|TE_JOIN_NODE_DEP|2023-01-28T17:10:25|28697|
|CLUSTER_RG_MOVE_ACQUIRE|epprd_rg|
|NODE_UP_COMPLETE|
|EVENT_PREAMBLE_END|
Jan 28 2023 17:10:27 EVENT START: node_up epprda
|2023-01-28T17:10:27|28697|EVENT START: node_up epprda|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:10:27.662239
+ echo '|2023-01-28T17:10:27.662239|INFO: node_up|epprda'
+ 1>> /var/hacmp/availability/clavailability.log
:node_up[182] version=%I%
:node_up[185] NODENAME=epprda
:node_up[185] export NODENAME
:node_up[193] STATUS=0
:node_up[193] typeset -li STATUS
:node_up[194] RC=0
:node_up[194] typeset -li RC
:node_up[195] ENABLE_NFS_CROSS_MOUNT=false
:node_up[196] START_MODE=''
:node_up[196] typeset START_MODE
:node_up[198] set -u
:node_up[200] (( 1 < 1 ))
:node_up[200] (( 1 > 2 ))
:node_up[207] : serial number for this event is 28697
:node_up[210] [[ epprda == epprda ]]
:node_up[213] : Remove the node halt lock file.
:node_up[214] : Hereafter, clstrmgr failure leads to node halt
:node_up[216] rm -f /usr/es/sbin/cluster/etc/ha_nodehalt.lock
:node_up[219] (( 1 > 1 ))
:node_up[256] : If RG_DEPENDENCIES=false, process RGs with clsetenvgrp
:node_up[258] [[ TRUE == FALSE ]]
:node_up[281] : localnode processing prior to RG acquisition
:node_up[283] [[ epprda == epprda ]]
:node_up[283] [[ '' != forced ]]
:node_up[286] : Reserve Volume Groups using SCSIPR
:node_up[288] clodmget -n -q policy=scsi -f value HACMPsplitmerge
:node_up[288] SCSIPR_ENABLED=''
:node_up[288] typeset SCSIPR_ENABLED
:node_up[289] [[ '' == Yes ]]
:node_up[334] : Setup VG fencing. This must be done prior to any potential disk access.
:node_up[336] node_up_vg_fence_init
:node_up[node_up_vg_fence_init:73] typeset VGs_on_line
:node_up[node_up_vg_fence_init:74] typeset VG_name
:node_up[node_up_vg_fence_init:75] typeset VG_ID
:node_up[node_up_vg_fence_init:76] typeset VG_PV_list
:node_up[node_up_vg_fence_init:79] : Find out what volume groups are currently on-line
:node_up[node_up_vg_fence_init:81] lsvg -L -o
:node_up[node_up_vg_fence_init:81] 2> /var/hacmp/log/node_up.lsvg.err
:node_up[node_up_vg_fence_init:81] print caavg_private rootvg
:node_up[node_up_vg_fence_init:81] VGs_on_line='caavg_private rootvg'
:node_up[node_up_vg_fence_init:82] [[ -e /var/hacmp/log/node_up.lsvg.err ]]
:node_up[node_up_vg_fence_init:82] [[ ! -s /var/hacmp/log/node_up.lsvg.err ]]
:node_up[node_up_vg_fence_init:82] rm /var/hacmp/log/node_up.lsvg.err
:node_up[node_up_vg_fence_init:85] : Clean up any old fence group files and stale fence groups.
:node_up[node_up_vg_fence_init:86] : These are all of the form '/usr/es/sbin/cluster/etc/vg/*.uuid'
:node_up[node_up_vg_fence_init:88] valid_vg_lst=''
:node_up[node_up_vg_fence_init:89] lsvg -L
:node_up[node_up_vg_fence_init:89] egrep -vw 'rootvg|caavg_private'
:node_up[node_up_vg_fence_init:89] 2>> /var/hacmp/log/node_up.lsvg.err
:node_up:datavg[node_up_vg_fence_init:91] PS4_LOOP=datavg
:node_up:datavg[node_up_vg_fence_init:92] clodmget -q $'name like \'*VOLUME_GROUP\' and value = datavg' -f value -n HACMPresource
:node_up:datavg[node_up_vg_fence_init:92] [[ -z datavg ]]
:node_up:datavg[node_up_vg_fence_init:109] : Volume group datavg is an HACMP resource
:node_up:datavg[node_up_vg_fence_init:111] [[ 'caavg_private rootvg' == ?(*\ )datavg?(\ *) ]]
:node_up:datavg[node_up_vg_fence_init:115] fence_height=ro
:node_up:datavg[node_up_vg_fence_init:119] : Recreate the fence group to match current volume group membership
:node_up:datavg[node_up_vg_fence_init:121] cl_vg_fence_redo -c datavg ro
:cl_vg_fence_redo[52] version=1.3
:cl_vg_fence_redo[55] RC=0
:cl_vg_fence_redo[55] typeset -li RC
:cl_vg_fence_redo[58] : Check for optional -c parameter
:cl_vg_fence_redo[60] [[ -c == -c ]]
:cl_vg_fence_redo[62] c_flag=-c
:cl_vg_fence_redo[63] shift
:cl_vg_fence_redo[66] VG=datavg
:cl_vg_fence_redo[67] UUID_file=/usr/es/sbin/cluster/etc/vg/datavg.uuid
:cl_vg_fence_redo[68] fence_height=ro
:cl_vg_fence_redo[70] [[ -s /usr/es/sbin/cluster/etc/vg/datavg.uuid ]]
:cl_vg_fence_redo[83] [[ -z ro ]]
:cl_vg_fence_redo[98] : Rebuild the fence group for datavg
:cl_vg_fence_redo[99] : First, find the disks in the volume group
:cl_vg_fence_redo[101] /usr/sbin/getlvodm -v datavg
:cl_vg_fence_redo[101] VGID=00c44af100004b00000001851e9dc053
:cl_vg_fence_redo[103] [[ -n 00c44af100004b00000001851e9dc053 ]]
:cl_vg_fence_redo[106] : Create a fence group for datavg
:cl_vg_fence_redo[108] /usr/sbin/getlvodm -w 00c44af100004b00000001851e9dc053
:cl_vg_fence_redo[108] cut -f2 '-d '
:cl_vg_fence_redo[108] PV_disk_list=$'hdisk2\nhdisk3\nhdisk4\nhdisk5\nhdisk6\nhdisk7\nhdisk8'
:cl_vg_fence_redo[109] cl_vg_fence_init -c datavg ro hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8
cl_vg_fence_init[145]: version @(#) 7d4c34b 43haes/usr/sbin/cluster/events/utils/cl_vg_fence_init.c, 726, 2147A_aha726, Feb 05 2021 09:50 PM
cl_vg_fence_init[204]: odm_initialize()
cl_vg_fence_init[231]: calloc(7, 64)
cl_vg_fence_init[259]: getattr(hdisk2, PCM) = PCM/friend/fcpother
cl_vg_fence_init[259]: getattr(hdisk3, PCM) = PCM/friend/fcpother
cl_vg_fence_init[259]: getattr(hdisk4, PCM) = PCM/friend/fcpother
cl_vg_fence_init[259]: getattr(hdisk5, PCM) = PCM/friend/fcpother
cl_vg_fence_init[259]: getattr(hdisk6, PCM) = PCM/friend/fcpother
cl_vg_fence_init[259]: getattr(hdisk7, PCM) = PCM/friend/fcpother
cl_vg_fence_init[259]: getattr(hdisk8, PCM) = PCM/friend/fcpother
cl_vg_fence_init[294]: sfwAddFenceGroup(datavg, 7, hdisk2, hdisk3, hdisk4, hdisk5, hdisk6, hdisk7, hdisk8)
cl_vg_fence_init[374]: free(200101b8)
cl_vg_fence_init[400]: creat(/usr/es/sbin/cluster/etc/vg/datavg.uuid)
cl_vg_fence_init[408]: write(/usr/es/sbin/cluster/etc/vg/datavg.uuid, 16)
cl_vg_fence_init[442]: sfwSetFenceGroup(vg=datavg, height=ro(2) uuid=ec2db4422261eae02091227fb9e53c88)
:cl_vg_fence_redo[110] RC=0
:cl_vg_fence_redo[111] : Exit status is 0 from cl_vg_fence_init datavg ro hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8
:cl_vg_fence_redo[113] (( 0 != 0 ))
:cl_vg_fence_redo[123] return 0
:node_up:datavg[node_up_vg_fence_init:122] valid_vg_lst=' datavg'
:node_up:datavg[node_up_vg_fence_init:125] [[ -e /var/hacmp/log/node_up.lsvg.err ]]
:node_up:datavg[node_up_vg_fence_init:125] [[ ! -s /var/hacmp/log/node_up.lsvg.err ]]
:node_up:datavg[node_up_vg_fence_init:125] rm /var/hacmp/log/node_up.lsvg.err
:node_up:datavg[node_up_vg_fence_init:128] : Any remaining old fence group files are from stale fence groups,
:node_up:datavg[node_up_vg_fence_init:129] : so remove them
:node_up:datavg[node_up_vg_fence_init:131] [[ -s /usr/es/sbin/cluster/etc/vg/datavg.uuid ]]
:node_up:datavg[node_up_vg_fence_init:133] ls /usr/es/sbin/cluster/etc/vg/datavg.uuid
:node_up:/usr/es/sbin/cluster/etc/vg/datavg.uuid[node_up_vg_fence_init:135] PS4_LOOP=/usr/es/sbin/cluster/etc/vg/datavg.uuid
:node_up:/usr/es/sbin/cluster/etc/vg/datavg.uuid[node_up_vg_fence_init:136] VG_name=datavg.uuid
:node_up:/usr/es/sbin/cluster/etc/vg/datavg.uuid[node_up_vg_fence_init:137] VG_name=datavg
:node_up:/usr/es/sbin/cluster/etc/vg/datavg.uuid[node_up_vg_fence_init:138] [[ ' datavg' == ?(*\ )datavg?(\ *) ]]
:node_up:/usr/es/sbin/cluster/etc/vg/datavg.uuid[node_up_vg_fence_init:141] : Just redid the fence group for datavg
:node_up:/usr/es/sbin/cluster/etc/vg/datavg.uuid[node_up_vg_fence_init:143] continue
:node_up:/usr/es/sbin/cluster/etc/vg/datavg.uuid[node_up_vg_fence_init:158] unset PS4_LOOP
:node_up[node_up_vg_fence_init:160] return 0
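node_up_vg_fence_init above walks every local volume group that is a PowerHA resource and rebuilds its fence group, leaving offline groups fenced read-only so nothing can write to the shared disks before the resource group acquires them. cl_vg_fence_redo resolves the VG to its VGID (getlvodm -v), lists the member disks (getlvodm -w), and calls cl_vg_fence_init, which records the fence group UUID in /usr/es/sbin/cluster/etc/vg/<vg>.uuid. A condensed ksh paraphrase of the traced loop (in this trace datavg is offline, so only the read-only path is shown):

    VGs_on_line=$(lsvg -L -o 2>/var/hacmp/log/node_up.lsvg.err)    # 'caavg_private rootvg' here
    lsvg -L | egrep -vw 'rootvg|caavg_private' |
    while read VG
    do
        # Only volume groups that appear as *VOLUME_GROUP resources get a fence group
        [[ -z $(clodmget -q "name like '*VOLUME_GROUP' and value = $VG" -f value -n HACMPresource) ]] && continue
        fence_height=ro                        # offline VG: fence read-only until acquisition
        cl_vg_fence_redo -c $VG $fence_height
    done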
:node_up[344] : If WLM manager classes have been configured for an application server, process them now
:node_up[346] clodmget -q $'name like \'WLM_*\'' -f id HACMPresource
:node_up[346] [[ -n '' ]]
:node_up[371] : Call ss-load replicated resource methods if they are defined
:node_up[373] cl_rrmethods2call ss_load
:cl_rrmethods2call[56] version=%I%
:cl_rrmethods2call[84] RRMETHODS=''
:cl_rrmethods2call[85] NEED_RR_ENV_VARS=no
:cl_rrmethods2call[104] : The load and unload methods if defined are returned on the
:cl_rrmethods2call[105] : local node
:cl_rrmethods2call[107] [[ epprda == epprda ]]
:cl_rrmethods2call[109] NEED_RR_ENV_VARS=yes
:cl_rrmethods2call[129] : Set the '*_REP_RESOURCE' variables if needed.
:cl_rrmethods2call[131] [[ yes == yes ]]
:cl_rrmethods2call[133] cllsres
:cl_rrmethods2call[133] 2> /dev/null
:cl_rrmethods2call[133] eval APPLICATIONS='"epprd_app"' EXPORT_FILESYSTEM='"/board_org"' FILESYSTEM='""' FORCED_VARYON='"false"' FSCHECK_TOOL='"fsck"' FS_BEFORE_IPADDR='"false"' MOUNT_FILESYSTEM='"/board;/board_org"' RECOVERY_METHOD='"sequential"' SERVICE_LABEL='"epprd"' SSA_DISK_FENCING='"false"' VG_AUTO_IMPORT='"false"' VOLUME_GROUP='"datavg"' USERDEFINED_RESOURCES='""'
:cl_rrmethods2call[1] APPLICATIONS=epprd_app
:cl_rrmethods2call[1] EXPORT_FILESYSTEM=/board_org
:cl_rrmethods2call[1] FILESYSTEM=''
:cl_rrmethods2call[1] FORCED_VARYON=false
:cl_rrmethods2call[1] FSCHECK_TOOL=fsck
:cl_rrmethods2call[1] FS_BEFORE_IPADDR=false
:cl_rrmethods2call[1] MOUNT_FILESYSTEM='/board;/board_org'
:cl_rrmethods2call[1] RECOVERY_METHOD=sequential
:cl_rrmethods2call[1] SERVICE_LABEL=epprd
:cl_rrmethods2call[1] SSA_DISK_FENCING=false
:cl_rrmethods2call[1] VG_AUTO_IMPORT=false
:cl_rrmethods2call[1] VOLUME_GROUP=datavg
:cl_rrmethods2call[1] USERDEFINED_RESOURCES=''
:cl_rrmethods2call[137] [[ -n '' ]]
:cl_rrmethods2call[142] [[ -n '' ]]
:cl_rrmethods2call[147] [[ -n '' ]]
:cl_rrmethods2call[152] [[ -n '' ]]
:cl_rrmethods2call[157] [[ -n '' ]]
:cl_rrmethods2call[162] [[ -n '' ]]
:cl_rrmethods2call[167] [[ -n '' ]]
:cl_rrmethods2call[172] [[ -n '' ]]
:cl_rrmethods2call[182] [[ -z '' ]]
:cl_rrmethods2call[184] typeset sysmgdata
:cl_rrmethods2call[185] typeset reposmgdata
:cl_rrmethods2call[186] [[ -x /usr/es/sbin/cluster/xd_generic/xd_cli/clxd_list_mg_smit ]]
:cl_rrmethods2call[191] [[ -n '' ]]
:cl_rrmethods2call[191] [[ -n '' ]]
:cl_rrmethods2call[197] echo ''
:cl_rrmethods2call[199] return 0
:node_up[373] METHODS=''
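cl_rrmethods2call decides which replicated-resource methods node_up must run by materializing the resource definitions as shell variables: the cllsres output is a stream of name="value" assignments that gets eval'ed, after which each *_REP_RESOURCE variable is tested. With none set, the method list comes back empty, which is why METHODS='' above. A sketch of the traced pattern (a paraphrase, not the shipped script):

    eval $(cllsres 2>/dev/null)    # emits VOLUME_GROUP="datavg" SERVICE_LABEL="epprd" etc.
    RRMETHODS=''
    # ...one [[ -n $..._REP_RESOURCE ]] test per replication type follows; all empty here...
    echo "$RRMETHODS"              # empty: no ss_load methods for node_up to call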
:node_up[387] : When the local node is brought up, reset the resource locator info.
:node_up[390] clchdaemons -r -d clstrmgr_scripts -t resource_locator
:node_up[397] [[ '' != manual ]]
:node_up[400] : attempt passive varyon for any ECM VGs in serial RGs
:node_up[405] cl_pvo
:cl_pvo[590] version=1.34.2.12
:cl_pvo(0.007)[592] PS4_TIMER=true
:cl_pvo(0.007)[594] rc=0
:cl_pvo(0.007)[594] typeset -li rc
:cl_pvo(0.007)[595] mode=0
:cl_pvo(0.007)[595] typeset -li mode
:cl_pvo(0.007)[600] ENODEV=19
:cl_pvo(0.007)[600] typeset -li ENODEV
:cl_pvo(0.007)[601] vg_force_on_flag=''
:cl_pvo(0.007)[605] : Pick up any passed options
:cl_pvo(0.007)[607] rg_list=''
:cl_pvo(0.007)[607] export rg_list
:cl_pvo(0.007)[608] vg_list=''
:cl_pvo(0.007)[609] fs_list=''
:cl_pvo(0.008)[610] all_vgs_flag=''
:cl_pvo(0.008)[611] [[ -z '' ]]
:cl_pvo(0.008)[613] all_vgs_flag=true
:cl_pvo(0.008)[615] getopts :g:v:f: option
:cl_pvo(0.008)[629] shift 0
:cl_pvo(0.008)[630] [[ -n '' ]]
:cl_pvo(0.008)[645] O_flag=''
:cl_pvo(0.008)[646] odmget -q 'attribute = varyon_state' PdAt
:cl_pvo(0.010)[646] [[ -n $'\nPdAt:\n\tuniquetype = "logical_volume/vgsubclass/vgtype"\n\tattribute = "varyon_state"\n\tdeflt = "0"\n\tvalues = "0,1,2,3"\n\twidth = ""\n\ttype = "R"\n\tgeneric = ""\n\trep = "l"\n\tnls_index = 0' ]]
:cl_pvo(0.010)[649] : LVM may record that a volume group was varied on from an earlier
:cl_pvo(0.010)[650] : IPL. Rely on HA state tracking, and override the LVM check
:cl_pvo(0.010)[652] O_flag=-O
:cl_pvo(0.010)[655] [[ -n true ]]
:cl_pvo(0.010)[657] [[ -z epprda ]]
:cl_pvo(0.010)[661] [[ -z epprda ]]
:cl_pvo(0.010)[672] : Since no resource names of any type were explicitly passed, go
:cl_pvo(0.010)[673] : find all the resource groups this node is a member of.
:cl_pvo(0.012)[675] clodmget -f group,nodes HACMPgroup
:cl_pvo(0.015)[675] egrep '[: ]epprda( |$)'
:cl_pvo(0.016)[675] cut -f1 -d:
:cl_pvo(0.019)[675] rg_list=epprd_rg
:cl_pvo(0.019)[676] [[ -z epprd_rg ]]
:cl_pvo(0.019)[686] [[ -z '' ]]
:cl_pvo(0.019)[686] [[ -n epprd_rg ]]
:cl_pvo(0.019)[689] : Since no volume groups were passed, go find all the volume groups
:cl_pvo(0.019)[690] : in the given/extracted list of resource groups.
:cl_pvo(0.019)[695] : For each resource group that this node participates in, get the
:cl_pvo(0.019)[696] : list of serial access volume groups in that resource group.
:cl_pvo(0.019)[698] clodmget -q 'group = epprd_rg and name = VOLUME_GROUP' -f value -n HACMPresource
:cl_pvo(0.022)[698] rg_vg_list=datavg
:cl_pvo(0.022)[700] [[ -n datavg ]]
:cl_pvo(0.022)[702] [[ -n true ]]
:cl_pvo(0.022)[703] odmget -q $'group = epprd_rg and name like \'*REP_RESOURCE\'' HACMPresource
:cl_pvo(0.024)[703] [[ -n '' ]]
:cl_pvo(0.024)[739] : If there were any serial access volume groups for this node and
:cl_pvo(0.024)[740] : that resource group, add them to the list.
:cl_pvo(0.024)[742] vg_list=datavg
:cl_pvo(0.024)[747] [[ -z '' ]]
:cl_pvo(0.024)[747] [[ -n epprd_rg ]]
:cl_pvo(0.024)[750] : Since no file systems were passed, go find all the file systems in
:cl_pvo(0.024)[751] : the given/extracted list of resource groups.
:cl_pvo(0.024)[755] : For each resource group that this node participates in, get the
:cl_pvo(0.024)[756] : list of file systems in that resource group.
:cl_pvo(0.024)[761] clodmget -q 'group = epprd_rg and name = FILESYSTEM' -f value -n HACMPresource
:cl_pvo(0.027)[761] rg_fs_list=ALL
:cl_pvo(0.027)[763] [[ -n ALL ]]
:cl_pvo(0.027)[765] [[ -n true ]]
:cl_pvo(0.027)[766] odmget -q $'group = epprd_rg and name like \'*REP_RESOURCE\'' HACMPresource
:cl_pvo(0.029)[766] [[ -n '' ]]
:cl_pvo(0.029)[780] : If there were any file systems for this node and that resource
:cl_pvo(0.029)[781] : group, add them to the list
:cl_pvo(0.029)[783] fs_list=ALL
:cl_pvo(0.029)[790] [[ ALL == ALL ]]
:cl_pvo(0.029)[792] continue
:cl_pvo(0.029)[801] : Remove any duplicates from the volume group list
:cl_pvo(0.031)[803] echo datavg
:cl_pvo(0.033)[803] tr ' ' '\n'
:cl_pvo(0.034)[803] sort -u
:cl_pvo(0.038)[803] vg_list=datavg
:cl_pvo(0.038)[805] [[ -z datavg ]]
:cl_pvo(0.038)[814] : Find out what volume groups are currently on-line
:cl_pvo(0.038)[816] lsvg -L -o
:cl_pvo(0.039)[816] 2> /tmp/lsvg.err
:cl_pvo(0.042)[816] print caavg_private rootvg
:cl_pvo(0.042)[816] ON_LIST='caavg_private rootvg'
:cl_pvo(0.042)[819] : If this node is the first node up in the cluster,
:cl_pvo(0.042)[820] : we want to do a sync for each of the volume groups
:cl_pvo(0.042)[821] : we bring on-line. If multiple cluster nodes are already active, the
:cl_pvo(0.042)[822] : sync is unnecessary, having been done once, and possibly disruptive.
:cl_pvo(0.042)[824] [[ -n '' ]]
:cl_pvo(0.042)[833] : No other cluster nodes are present, default to sync just to be sure
:cl_pvo(0.042)[834] : the volume group is in a good state
:cl_pvo(0.042)[836] syncflag=''
:cl_pvo(0.042)[840] : Now, process each volume group in the list of those this node accesses.
:cl_pvo(0.042):datavg[844] PS4_LOOP=datavg
:cl_pvo(0.042):datavg[844] typeset PS4_LOOP
:cl_pvo(0.042):datavg[846] : Skip any concurrent GMVGs, they should never be pvo.
:cl_pvo(0.043):datavg[848] odmget -q name='GMVG_REP_RESOURCE AND value=datavg' HACMPresource
:cl_pvo(0.046):datavg[848] [[ -n '' ]]
:cl_pvo(0.046):datavg[853] : The VGID is what the LVM low level commands used below use to
:cl_pvo(0.046):datavg[854] : identify the volume group.
:cl_pvo(0.046):datavg[856] /usr/sbin/getlvodm -v datavg
:cl_pvo(0.049):datavg[856] vgid=00c44af100004b00000001851e9dc053
:cl_pvo(0.049):datavg[860] mode=99
:cl_pvo(0.049):datavg[863] : Attempt to determine the mode of the volume group - is it an
:cl_pvo(0.049):datavg[864] : enhanced concurrent mode volume group or not.
:cl_pvo(0.049):datavg[868] export mode
:cl_pvo(0.049):datavg[869] hdisklist=''
:cl_pvo(0.050):datavg[870] /usr/sbin/getlvodm -w 00c44af100004b00000001851e9dc053
:cl_pvo(0.052):datavg[870] read pvid hdisk
:cl_pvo(0.052):datavg[871] hdisklist=hdisk2
:cl_pvo(0.052):datavg[870] read pvid hdisk
:cl_pvo(0.052):datavg[871] hdisklist='hdisk2 hdisk3'
:cl_pvo(0.052):datavg[870] read pvid hdisk
:cl_pvo(0.052):datavg[871] hdisklist='hdisk2 hdisk3 hdisk4'
:cl_pvo(0.052):datavg[870] read pvid hdisk
:cl_pvo(0.052):datavg[871] hdisklist='hdisk2 hdisk3 hdisk4 hdisk5'
:cl_pvo(0.052):datavg[870] read pvid hdisk
:cl_pvo(0.052):datavg[871] hdisklist='hdisk2 hdisk3 hdisk4 hdisk5 hdisk6'
:cl_pvo(0.052):datavg[870] read pvid hdisk
:cl_pvo(0.052):datavg[871] hdisklist='hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7'
:cl_pvo(0.052):datavg[870] read pvid hdisk
:cl_pvo(0.052):datavg[871] hdisklist='hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8'
:cl_pvo(0.052):datavg[870] read pvid hdisk
:cl_pvo(0.052):datavg[873] get_vg_mode 'hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8' 00c44af100004b00000001851e9dc053 datavg
:cl_pvo(0.052):datavg[get_vg_mode:289] typeset vgid vg_name syncflag hdisklist
:cl_pvo(0.052):datavg[get_vg_mode:290] typeset GROUP_NAME FORCED_VARYON
:cl_pvo(0.052):datavg[get_vg_mode:291] TUR_RC=0
:cl_pvo(0.052):datavg[get_vg_mode:291] typeset -li TUR_RC
:cl_pvo(0.052):datavg[get_vg_mode:292] vg_disks=0
:cl_pvo(0.052):datavg[get_vg_mode:292] typeset -li vg_disks
:cl_pvo(0.052):datavg[get_vg_mode:293] max_disk_test=0
:cl_pvo(0.052):datavg[get_vg_mode:293] typeset -li max_disk_test
:cl_pvo(0.052):datavg[get_vg_mode:294] disk_tested=0
:cl_pvo(0.052):datavg[get_vg_mode:294] typeset -li disk_tested
:cl_pvo(0.052):datavg[get_vg_mode:296] hdisklist='hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8'
:cl_pvo(0.052):datavg[get_vg_mode:297] vgid=00c44af100004b00000001851e9dc053
:cl_pvo(0.052):datavg[get_vg_mode:298] vg_name=datavg
:cl_pvo(0.052):datavg[get_vg_mode:299] syncflag=''
:cl_pvo(0.052):datavg[get_vg_mode:301] odmget -q name='datavg and attribute=conc_capable and value=y' CuAt
:cl_pvo(0.053):datavg[get_vg_mode:301] ODMDIR=/etc/objrepos
:cl_pvo(0.055):datavg[get_vg_mode:301] [[ -n $'\nCuAt:\n\tname = "datavg"\n\tattribute = "conc_capable"\n\tvalue = "y"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "l"\n\tnls_index = 0' ]]
:cl_pvo(0.055):datavg[get_vg_mode:304] : If LVM thinks that this volume group is concurrent capable, that
:cl_pvo(0.055):datavg[get_vg_mode:305] : is good enough
:cl_pvo(0.055):datavg[get_vg_mode:307] mode=32
:cl_pvo(0.055):datavg[get_vg_mode:308] return
:cl_pvo(0.055):datavg[876] : See if the volume group is already on line. This should
:cl_pvo(0.055):datavg[877] : only happen if it were manually brought on line outside of HACMP
:cl_pvo(0.055):datavg[878] : control, or left on-line after a forced down.
:cl_pvo(0.055):datavg[880] vg_on_mode=''
:cl_pvo(0.055):datavg[880] typeset vg_on_mode
:cl_pvo(0.055):datavg[881] [[ 'caavg_private rootvg' == ?(*\ )datavg?(\ *) ]]
:cl_pvo(0.056):datavg[891] lsvg -L datavg
:cl_pvo(0.056):datavg[891] 2> /dev/null
:cl_pvo(0.058):datavg[891] grep -q -i -w passive-only
:cl_pvo(0.071):datavg[896] [[ -n '' ]]
:cl_pvo(0.071):datavg[976] : Volume group is currently not on line in any mode
:cl_pvo(0.071):datavg[978] (( 99 == 32 ))
:cl_pvo(0.071):datavg[1041] (( 32 != 32 && 99 != 32 ))
:cl_pvo(0.071):datavg[1060] (( 32 == 32 ))
:cl_pvo(0.071):datavg[1063] : If this is actually an enhanced concurrent mode volume group,
:cl_pvo(0.071):datavg[1064] : bring it on line in passive mode. Other kinds are just skipped.
:cl_pvo(0.071):datavg[1066] varyonp datavg 'hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8'
:cl_pvo(0.071):datavg[varyonp:417] NOQUORUM=20
:cl_pvo(0.071):datavg[varyonp:417] typeset -li NOQUORUM
:cl_pvo(0.071):datavg[varyonp:418] rc=0
:cl_pvo(0.071):datavg[varyonp:418] typeset -li rc
:cl_pvo(0.071):datavg[varyonp:421] : Pick up passed parameters: volume group and sync flag
:cl_pvo(0.071):datavg[varyonp:423] typeset syncflag hdisklist vg
:cl_pvo(0.071):datavg[varyonp:424] vg=datavg
:cl_pvo(0.071):datavg[varyonp:425] hdisklist='hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8'
:cl_pvo(0.071):datavg[varyonp:426] syncflag=''
:cl_pvo(0.071):datavg[varyonp:429] : Make sure the volume group is not fenced. Varyon requires read write
:cl_pvo(0.071):datavg[varyonp:430] : access.
:cl_pvo(0.071):datavg[varyonp:432] cl_set_vg_fence_height -c datavg rw
cl_set_vg_fence_height[126]: version @(#)10 1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37
cl_set_vg_fence_height[180]: open(/usr/es/sbin/cluster/etc/vg/datavg.uuid, O_RDONLY)
cl_set_vg_fence_height[214]: read(datavg, 16)
cl_set_vg_fence_height[237]: close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)
cl_set_vg_fence_height[265]: sfwSetFenceGroup(vg=datavg uuid=ec2db4422261eae02091227fb9e53c88 height=rw(0))
:cl_pvo(0.074):datavg[varyonp:433] RC=0
:cl_pvo(0.074):datavg[varyonp:434] (( 19 == 0 ))
:cl_pvo(0.074):datavg[varyonp:442] : Return code from volume group fencing for datavg is 0
:cl_pvo(0.074):datavg[varyonp:443] (( 0 != 0 ))
:cl_pvo(0.074):datavg[varyonp:455] : Try to vary on the volume group in passive concurrent mode
:cl_pvo(0.074):datavg[varyonp:457] varyonvg -c -P -O datavg
:cl_pvo(0.667):datavg[varyonp:458] rc=0
:cl_pvo(0.668):datavg[varyonp:460] (( 0 != 0 ))
:cl_pvo(0.668):datavg[varyonp:483] : exit status of varyonvg -c -P -O datavg is: 0
:cl_pvo(0.668):datavg[varyonp:485] (( 0 == 20 ))
:cl_pvo(0.668):datavg[varyonp:505] : If varyon was ultimately unsuccessful, note the error
:cl_pvo(0.668):datavg[varyonp:507] (( 0 != 0 ))
:cl_pvo(0.668):datavg[varyonp:511] : If varyonvg was successful, try to recover
:cl_pvo(0.668):datavg[varyonp:512] : any missing or removed disks
:cl_pvo(0.668):datavg[varyonp:514] mr_recovery datavg
:cl_pvo(0.668):datavg[mr_recovery:59] vg=datavg
:cl_pvo(0.668):datavg[mr_recovery:59] typeset vg
:cl_pvo(0.668):datavg[mr_recovery:60] typeset mr_disks
:cl_pvo(0.668):datavg[mr_recovery:61] typeset disk_list
:cl_pvo(0.668):datavg[mr_recovery:62] typeset hdisk
:cl_pvo(0.670):datavg[mr_recovery:64] lsvg -p datavg
:cl_pvo(0.670):datavg[mr_recovery:64] 2> /dev/null
:cl_pvo(0.672):datavg[mr_recovery:64] grep -iw missing
:cl_pvo(0.691):datavg[mr_recovery:64] missing_disks=''
:cl_pvo(0.691):datavg[mr_recovery:66] [[ -n '' ]]
:cl_pvo(0.692):datavg[mr_recovery:89] lsvg -p datavg
:cl_pvo(0.692):datavg[mr_recovery:89] 2> /dev/null
:cl_pvo(0.695):datavg[mr_recovery:89] grep -iw removed
:cl_pvo(0.713):datavg[mr_recovery:89] removed_disks=''
:cl_pvo(0.713):datavg[mr_recovery:91] [[ -n '' ]]
:cl_pvo(0.713):datavg[varyonp:518] : Restore the fence height to read only, for passive varyon
:cl_pvo(0.713):datavg[varyonp:520] cl_set_vg_fence_height -c datavg ro
cl_set_vg_fence_height[126]: version @(#)10 1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37
cl_set_vg_fence_height[180]: open(/usr/es/sbin/cluster/etc/vg/datavg.uuid, O_RDONLY)
cl_set_vg_fence_height[214]: read(datavg, 16)
cl_set_vg_fence_height[237]: close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)
cl_set_vg_fence_height[265]: sfwSetFenceGroup(vg=datavg uuid=ec2db4422261eae02091227fb9e53c88 height=ro(2))
:cl_pvo(0.716):datavg[varyonp:521] RC=0
:cl_pvo(0.716):datavg[varyonp:522] : Return code from volume group fencing for datavg is 0
:cl_pvo(0.716):datavg[varyonp:523] (( 0 != 0 ))
:cl_pvo(0.716):datavg[varyonp:533] return 0
:cl_pvo(0.716):datavg[1073] return 0
:node_up[406] : exit status of cl_pvo is: 0
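cl_pvo classified datavg as an enhanced concurrent mode (ECM) group because LVM's CuAt shows conc_capable=y, then brought it online in passive mode: passive varyon gives the node access to LVM metadata without activating the data path, which is what enables fast takeover later. The traced varyonp sequence, condensed to its essential steps (a hedged paraphrase; the command flags are exactly those in the log, and mr_recovery is the internal helper traced above):

    cl_set_vg_fence_height -c datavg rw   # varyon needs to get past the read-only fence
    varyonvg -c -P -O datavg              # -c concurrent, -P passive, -O override stale LVM varyon state
    rc=$?
    (( rc != 0 )) && return $rc           # a quorum failure (rc 20, NOQUORUM) would be reported here
    mr_recovery datavg                    # bring back any 'missing' or 'removed' disks
    cl_set_vg_fence_height -c datavg ro   # re-fence read-only until the RG is actually acquired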
:node_up[422] ls '/dev/vpath*'
:node_up[422] 1> /dev/null 2>& 1
:node_up[432] : Configure any split and merge policies.
:node_up[434] rm -f /usr/es/sbin/cluster/etc/smm_oflag
:node_up[435] [[ -z '' ]]
:node_up[438] : If this is the first node up, configure split merge handling.
:node_up[440] cl_cfg_sm_rt
:cl_cfg_sm_rt[738] version=1.34
:cl_cfg_sm_rt[741] clctrl_rc=0
:cl_cfg_sm_rt[741] typeset -li clctrl_rc
:cl_cfg_sm_rt[742] src_rc=0
:cl_cfg_sm_rt[742] typeset -li src_rc
:cl_cfg_sm_rt[743] cl_migcheck_rc=0
:cl_cfg_sm_rt[743] typeset -li cl_migcheck_rc
:cl_cfg_sm_rt[744] bad_policy=''
:cl_cfg_sm_rt[745] SMP=''
:cl_cfg_sm_rt[748] : If we are in migration - if all nodes are not up to this level - do not
:cl_cfg_sm_rt[749] : attempt any configuration.
:cl_cfg_sm_rt[751] clmixver
:cl_cfg_sm_rt[751] version=22
:cl_cfg_sm_rt[752] (( 22 < 14 ))
:cl_cfg_sm_rt[761] : Retrieve configured policies
:cl_cfg_sm_rt[763] clodmget -q 'policy = action' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[763] Action=Reboot
:cl_cfg_sm_rt[764] clodmget -q 'policy = split' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[764] Split=None
:cl_cfg_sm_rt[765] clodmget -q 'policy = merge' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[765] Merge=Majority
:cl_cfg_sm_rt[766] clodmget -q 'policy = tiebreaker' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[766] TieBreaker=''
:cl_cfg_sm_rt[767] clodmget -q 'policy = nfs_quorumserver' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[767] nfs_quorumserver=''
:cl_cfg_sm_rt[768] clodmget -q 'policy = local_quorumdirectory' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[768] local_quorumdirectory=''
:cl_cfg_sm_rt[769] clodmget -q 'policy = remote_quorumdirectory' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[769] remote_quorumdirectory=''
:cl_cfg_sm_rt[770] clodmget -q 'policy = anhp' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[770] is_anhp=''
:cl_cfg_sm_rt[771] clodmget -q 'policy = scsi' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[771] is_scsi=''
:cl_cfg_sm_rt[772] clodmget -q name=clutils.log -f value -n HACMPlogs
:cl_cfg_sm_rt[772] CLUTILS_LOG=/var/hacmp/log/clutils.log
:cl_cfg_sm_rt[775] : If policies are unset, apply the default policies
:cl_cfg_sm_rt[777] Split=None
:cl_cfg_sm_rt[778] Merge=Majority
:cl_cfg_sm_rt[779] Action=Reboot
:cl_cfg_sm_rt[782] : If tiebreaker was a configured policy, be sure that one was defined
:cl_cfg_sm_rt[784] [[ -z '' ]]
:cl_cfg_sm_rt[786] [[ None == TieBreaker ]]
:cl_cfg_sm_rt[790] [[ Majority == TieBreaker ]]
:cl_cfg_sm_rt[795] [[ -n '' ]]
:cl_cfg_sm_rt[807] : Set up the interlock file for use by smcaactrl. This tells
:cl_cfg_sm_rt[808] : smcaactrl to allow the following CAA operations.
:cl_cfg_sm_rt[810] date
:cl_cfg_sm_rt[810] 1> /usr/es/sbin/cluster/etc/cl_cfg_sm_rt.26149292
:cl_cfg_sm_rt[811] trap 'on_exit $?' EXIT
:cl_cfg_sm_rt[814] : Setting up CAA tunable local_merge_policy
:cl_cfg_sm_rt[816] typeset -i caa_level
:cl_cfg_sm_rt[817] lslpp -l bos.cluster.rte
:cl_cfg_sm_rt[817] grep bos.cluster.rte
:cl_cfg_sm_rt[817] uniq
:cl_cfg_sm_rt[817] awk -F ' ' '{print $2}'
:cl_cfg_sm_rt[817] tr -d .
:cl_cfg_sm_rt[817] caa_level=725102
:cl_cfg_sm_rt[818] (( 725102 >=7140 ))
:cl_cfg_sm_rt[819] configure_local_merge_policy
:cl_cfg_sm_rt[configure_local_merge_policy:665] typeset -i clctrl_rc
:cl_cfg_sm_rt[configure_local_merge_policy:666] [[ -z '' ]]
:cl_cfg_sm_rt[configure_local_merge_policy:666] [[ -z '' ]]
:cl_cfg_sm_rt[configure_local_merge_policy:667] capability=0
:cl_cfg_sm_rt[configure_local_merge_policy:667] typeset -i capability
:cl_cfg_sm_rt[configure_local_merge_policy:669] cl_get_capabilities -i 6
:cl_cfg_sm_rt[configure_local_merge_policy:669] 2>& 1
:cl_cfg_sm_rt[configure_local_merge_policy:669] caa_sm_capability=$':cl_cfg_sm_rt[configure_local_merge_policy:669] LC_ALL=C\ncl_get_capabilities[178]: version 1.9\ncapability is 6\n\tid: 6 version: 1 flag: 1 '
:cl_cfg_sm_rt[configure_local_merge_policy:670] [[ -n $':cl_cfg_sm_rt[configure_local_merge_policy:669] LC_ALL=C\ncl_get_capabilities[178]: version 1.9\ncapability is 6\n\tid: 6 version: 1 flag: 1 ' ]]
:cl_cfg_sm_rt[configure_local_merge_policy:674] : If Sub Cluster Split Merge capability is defined
:cl_cfg_sm_rt[configure_local_merge_policy:675] : and globally available, then capability is set to 1
:cl_cfg_sm_rt[configure_local_merge_policy:677] capability='1 '
:cl_cfg_sm_rt[configure_local_merge_policy:680] (( 1 == 1 ))
:cl_cfg_sm_rt[configure_local_merge_policy:682] : Sub Cluster Split-Merge capability is available cluster wide
:cl_cfg_sm_rt[configure_local_merge_policy:684] [[ Majority != None ]]
:cl_cfg_sm_rt[configure_local_merge_policy:686] clctrl -tune -o local_merge_policy=h
1 tunable updated on cluster epprda_cluster.
:cl_cfg_sm_rt[configure_local_merge_policy:687] clctrl_rc=0
:cl_cfg_sm_rt[configure_local_merge_policy:688] (( 0 != 0 ))
:cl_cfg_sm_rt[configure_local_merge_policy:725] return 0
:cl_cfg_sm_rt[820] rc=0
:cl_cfg_sm_rt[820] typeset -i rc
:cl_cfg_sm_rt[821] (( 0 < 0 ))
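configure_local_merge_policy only touches the CAA tunable when every node advertises the sub-cluster split/merge capability (id 6) and a merge policy other than None is configured; here Merge=Majority, so local_merge_policy is set to h (heuristic). A hedged sketch of that decision (cl_get_capabilities is the internal helper traced above; the grep used to detect the capability flag is an assumption):

    if cl_get_capabilities -i 6 2>&1 | grep -q 'flag: 1'   # capability available cluster-wide
    then
        # Merge comes from HACMPsplitmerge; 'Majority' in this trace
        [[ $Merge != None ]] && clctrl -tune -o local_merge_policy=h
    fi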
:cl_cfg_sm_rt[827] : Configure CAA in accordance with the specified or defaulted policies
:cl_cfg_sm_rt[828] : for Merge
:cl_cfg_sm_rt[830] clctrl -tune -a
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).communication_mode = u
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).config_timeout = 240
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).deadman_mode = a
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).dr_enabled = 1
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).link_timeout = 30000
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).local_merge_policy = h
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).network_fdt = 20000
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).no_if_traffic_monitor = 0
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).node_down_delay = 10000
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).node_timeout = 30000
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).packet_ttl = 32
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).remote_hb_factor = 1
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).repos_mode = e
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).site_merge_policy = p
:cl_cfg_sm_rt[831] clctrl_rc=0
:cl_cfg_sm_rt[832] : Return code from 'clctrl -tune -a' is 0
:cl_cfg_sm_rt[835] : If the current deadman mode is not set to ASSERT,
:cl_cfg_sm_rt[836] : change it to that
:cl_cfg_sm_rt[842] clctrl -tune -x deadman_mode
:cl_cfg_sm_rt[842] cut -f2 -d:
:cl_cfg_sm_rt[842] current_deadman_mode=a
:cl_cfg_sm_rt[843] [[ a != a ]]
:cl_cfg_sm_rt[849] : Determine the current site merge policy, to see if it needs
:cl_cfg_sm_rt[850] : to be changed
:cl_cfg_sm_rt[852] clctrl -tune -x site_merge_policy
:cl_cfg_sm_rt[852] cut -f2 -d:
:cl_cfg_sm_rt[852] current_merge_policy=p
:cl_cfg_sm_rt[854] [[ Majority == Manual ]]
:cl_cfg_sm_rt[865] [[ Majority == None ]]
:cl_cfg_sm_rt[878] : Everything else - tie breaker, majority, nfs - is heuristic merge policy
:cl_cfg_sm_rt[880] [[ p != h ]]
:cl_cfg_sm_rt[882] SMP=h
:cl_cfg_sm_rt[883] clctrl -tune -o site_merge_policy=h
1 tunable updated on cluster epprda_cluster.
:cl_cfg_sm_rt[886] clctrl_rc=0
:cl_cfg_sm_rt[887] (( 0 != 0 ))
:cl_cfg_sm_rt[901] [[ -n h ]]
:cl_cfg_sm_rt[904] : Make sure all instances of CAA across the cluster got the word
:cl_cfg_sm_rt[906] /usr/es/sbin/cluster/cspoc/cli_on_cluster -S clctrl -tune -x site_merge_policy
:cl_cfg_sm_rt[906] sort -u
:cl_cfg_sm_rt[906] cut -f3 -d:
clhaver[576]: version 1.14
clhaver[591]: colon delimited output
clhaver[612]: MINVER=5100
clhaver[624]: thread(epprda)
clhaver[144]: cl_gethostbynode epprda
cl_gethostbynode[102]: version 1.1 i_flag=0 given name is epprda
cl_gethostbynode[127]: cl_query nodes=2
cl_gethostbynode[161]: epprda is a PowerHA node name
cl_gethostbynode[313]: epprda is the CAA host matching PowerHA node epprda
clhaver[157]: node epprda resolves to epprda
clhaver[166]: cl_socket(COLLVER epprda epprda)
clhaver[191]: cl_connect(epprda)
clhaver[230]: read(epprda)
clhaver[624]: thread(epprds)
clhaver[144]: cl_gethostbynode epprds
cl_gethostbynode[102]: version 1.1 i_flag=0 given name is epprds
cl_gethostbynode[127]: cl_query nodes=2
cl_gethostbynode[161]: epprds is a PowerHA node name
cl_gethostbynode[313]: epprds is the CAA host matching PowerHA node epprds
clhaver[157]: node epprds resolves to epprds
clhaver[166]: cl_socket(COLLVER epprds epprds)
clhaver[191]: cl_connect(epprds)
clhaver[230]: read(epprds)
epprda: :cl_rsh[99] version=1.4
epprda: :cl_rsh[102] CAA_node_name=''
epprda: :cl_rsh[105] : Process optional flags
epprda: :cl_rsh[107] cmd_flag=-n
epprda: :cl_rsh[108] [[ -n == -n ]]
epprda: :cl_rsh[111] : Remove the no standard input flag
epprda: :cl_rsh[113] shift
epprda: :cl_rsh[124] : Pick up and check the input
epprda: :cl_rsh[126] print 'epprda /usr/es/sbin/cluster/cspoc/cexec eval gdgmgdhehcgmcacnhehfgogfcacnhicahdgjhegffpgngfhcghgffphagpgmgjgdhj'
epprda: :cl_rsh[126] read destination command
epprda: :cl_rsh[127] [[ -z epprda ]]
epprda: :cl_rsh[127] [[ -z '/usr/es/sbin/cluster/cspoc/cexec eval gdgmgdhehcgmcacnhehfgogfcacnhicahdgjhegffpgngfhcghgffphagpgmgjgdhj' ]]
epprda: :cl_rsh[136] /usr/es/sbin/cluster/utilities/cl_nn2hn epprda
epprda: :cl_nn2hn[83] version=1.11
epprda: :cl_nn2hn[86] CAA_host_name=''
epprda: :cl_nn2hn[86] typeset CAA_host_name
epprda: :cl_nn2hn[87] node_name=''
epprda: :cl_nn2hn[87] typeset node_name
epprda: :cl_nn2hn[88] node_interfaces=''
epprda: :cl_nn2hn[88] typeset node_interfaces
epprda: :cl_nn2hn[89] COMM_PATH=''
epprda: :cl_nn2hn[89] typeset COMM_PATH
epprda: :cl_nn2hn[90] r_flag=''
epprda: :cl_nn2hn[90] typeset r_flag
epprda: :cl_nn2hn[93] : Pick up and check the input
epprda: :cl_nn2hn[95] getopts r option
epprda: :cl_nn2hn[106] : Pick up the destination, which follows the options
epprda: :cl_nn2hn[108] shift 0
epprda: :cl_nn2hn[109] destination=epprda
epprda: :cl_nn2hn[109] typeset destination
epprda: :cl_nn2hn[111] [[ -z epprda ]]
epprda: :cl_nn2hn[121] : In order to prevent recursion, first you must prevent recursion...
epprda: :cl_nn2hn[123] [[ '' != TRUE ]]
epprda: :cl_nn2hn[126] : This routine is not being called from cl_query_hn_id, so call it
epprda: :cl_nn2hn[127] : to see if it can find the CAA host name based on a common short
epprda: :cl_nn2hn[128] : id, or match on CAA host name, or match on CAA short name, or
epprda: :cl_nn2hn[129] : similar match in /etc/cluster/rhosts.
epprda: :cl_nn2hn[131] cl_query_hn_id -q -i epprda
epprda: cl_query_hn_id[137]: version 1.2
epprda: cl_gethostbynode[102]: version 1.1 i_flag=105 given name is epprda
epprda: cl_gethostbynode[127]: cl_query nodes=2
epprda: cl_gethostbynode[161]: epprda is a PowerHA node name
epprda: cl_gethostbynode[313]: epprda is the CAA host matching PowerHA node epprda
epprda: :cl_nn2hn[131] CAA_host_name=epprda
epprda: :cl_nn2hn[132] RC=0
epprda: :cl_nn2hn[133] (( 0 == 0 ))
epprda: :cl_nn2hn[136] : The straight forward tests worked!
epprda: :cl_nn2hn[138] [[ epprda == @(+([0-9.])|+([0-9:])) ]]
epprda: :cl_nn2hn[159] [[ -z epprda ]]
epprda: :cl_nn2hn[340] [[ -z epprda ]]
epprda: :cl_nn2hn[345] [[ -n epprda ]]
epprda: :cl_nn2hn[348] : We have found epprda is our best guess at a CAA host name
epprda: :cl_nn2hn[349] : corresponding to epprda
epprda: :cl_nn2hn[351] print epprda
epprda: :cl_nn2hn[352] return 0
epprda: :cl_rsh[136] CAA_node_name=epprda
epprda: :cl_rsh[148] : Invoke clcomd
epprda: :cl_rsh[150] /usr/sbin/clrsh epprda -n '/usr/es/sbin/cluster/cspoc/cexec eval gdgmgdhehcgmcacnhehfgogfcacnhicahdgjhegffpgngfhcghgffphagpgmgjgdhj'
epprda: :cl_rsh[151] return 0
epprds: :cl_rsh[99] version=1.4
epprds: :cl_rsh[102] CAA_node_name=''
epprds: :cl_rsh[105] : Process optional flags
epprds: :cl_rsh[107] cmd_flag=-n
epprds: :cl_rsh[108] [[ -n == -n ]]
epprds: :cl_rsh[111] : Remove the no standard input flag
epprds: :cl_rsh[113] shift
epprds: :cl_rsh[124] : Pick up and check the input
epprds: :cl_rsh[126] read destination command
epprds: :cl_rsh[126] print 'epprds /usr/es/sbin/cluster/cspoc/cexec eval gdgmgdhehcgmcacnhehfgogfcacnhicahdgjhegffpgngfhcghgffphagpgmgjgdhj'
epprds: :cl_rsh[127] [[ -z epprds ]]
epprds: :cl_rsh[127] [[ -z '/usr/es/sbin/cluster/cspoc/cexec eval gdgmgdhehcgmcacnhehfgogfcacnhicahdgjhegffpgngfhcghgffphagpgmgjgdhj' ]]
epprds: :cl_rsh[136] /usr/es/sbin/cluster/utilities/cl_nn2hn epprds
epprds: :cl_nn2hn[83] version=1.11
epprds: :cl_nn2hn[86] CAA_host_name=''
epprds: :cl_nn2hn[86] typeset CAA_host_name
epprds: :cl_nn2hn[87] node_name=''
epprds: :cl_nn2hn[87] typeset node_name
epprds: :cl_nn2hn[88] node_interfaces=''
epprds: :cl_nn2hn[88] typeset node_interfaces
epprds: :cl_nn2hn[89] COMM_PATH=''
epprds: :cl_nn2hn[89] typeset COMM_PATH
epprds: :cl_nn2hn[90] r_flag=''
epprds: :cl_nn2hn[90] typeset r_flag
epprds: :cl_nn2hn[93] : Pick up and check the input
epprds: :cl_nn2hn[95] getopts r option
epprds: :cl_nn2hn[106] : Pick up the destination, which follows the options
epprds: :cl_nn2hn[108] shift 0
epprds: :cl_nn2hn[109] destination=epprds
epprds: :cl_nn2hn[109] typeset destination
epprds: :cl_nn2hn[111] [[ -z epprds ]]
epprds: :cl_nn2hn[121] : In order to prevent recursion, first you must prevent recursion...
epprds: :cl_nn2hn[123] [[ '' != TRUE ]]
epprds: :cl_nn2hn[126] : This routine is not being called from cl_query_hn_id, so call it
epprds: :cl_nn2hn[127] : to see if it can find the CAA host name based on a common short
epprds: :cl_nn2hn[128] : id, or match on CAA host name, or match on CAA short name, or
epprds: :cl_nn2hn[129] : similar match in /etc/cluster/rhosts.
epprds: :cl_nn2hn[131] cl_query_hn_id -q -i epprds
epprds: cl_query_hn_id[137]: version 1.2
epprds: cl_gethostbynode[102]: version 1.1 i_flag=105 given name is epprds
epprds: cl_gethostbynode[127]: cl_query nodes=2
epprds: cl_gethostbynode[161]: epprds is a PowerHA node name
epprds: cl_gethostbynode[313]: epprds is the CAA host matching PowerHA node epprds
epprds: :cl_nn2hn[131] CAA_host_name=epprds
epprds: :cl_nn2hn[132] RC=0
epprds: :cl_nn2hn[133] (( 0 == 0 ))
epprds: :cl_nn2hn[136] : The straight forward tests worked!
epprds: :cl_nn2hn[138] [[ epprds == @(+([0-9.])|+([0-9:])) ]]
epprds: :cl_nn2hn[159] [[ -z epprds ]]
epprds: :cl_nn2hn[340] [[ -z epprds ]]
epprds: :cl_nn2hn[345] [[ -n epprds ]]
epprds: :cl_nn2hn[348] : We have found epprds is our best guess at a CAA host name
epprds: :cl_nn2hn[349] : corresponding to epprds
epprds: :cl_nn2hn[351] print epprds
epprds: :cl_nn2hn[352] return 0
epprds: :cl_rsh[136] CAA_node_name=epprds
epprds: :cl_rsh[148] : Invoke clcomd
epprds: :cl_rsh[150] /usr/sbin/clrsh epprds -n '/usr/es/sbin/cluster/cspoc/cexec eval gdgmgdhehcgmcacnhehfgogfcacnhicahdgjhegffpgngfhcghgffphagpgmgjgdhj'
epprds: :cl_rsh[151] return 0
:cl_cfg_sm_rt[906] [[ h != h ]]
:cl_cfg_sm_rt[919] RSCT_START_RETRIES=0
:cl_cfg_sm_rt[919] typeset -li RSCT_START_RETRIES
:cl_cfg_sm_rt[920] MIN_RSCT_RETRIES=1
:cl_cfg_sm_rt[920] typeset -li MIN_RSCT_RETRIES
:cl_cfg_sm_rt[921] MAX_RSCT_RETRIES=15
:cl_cfg_sm_rt[921] typeset -li MAX_RSCT_RETRIES
:cl_cfg_sm_rt[922] grep ^RSCT_START_RETRIES /etc/environment
:cl_cfg_sm_rt[922] eval
:cl_cfg_sm_rt[923] (( 0 < 1 ))
:cl_cfg_sm_rt[923] RSCT_START_RETRIES=1
:cl_cfg_sm_rt[924] (( 1 > 15 ))
:cl_cfg_sm_rt[926] RSCT_TB_WAITTIME=0
:cl_cfg_sm_rt[926] typeset -li RSCT_TB_WAITTIME
:cl_cfg_sm_rt[927] grep ^RSCT_TB_WAITTIME /etc/environment
:cl_cfg_sm_rt[927] eval
:cl_cfg_sm_rt[928] (( 0 <= 0 ))
:cl_cfg_sm_rt[928] RSCT_TB_WAITTIME=30
:cl_cfg_sm_rt[930] RSCT_START_WAIT=0
:cl_cfg_sm_rt[930] typeset -li RSCT_START_WAIT
:cl_cfg_sm_rt[931] MIN_RSCT_WAIT=10
:cl_cfg_sm_rt[931] typeset -li MIN_RSCT_WAIT
:cl_cfg_sm_rt[932] MAX_RSCT_WAIT=60
:cl_cfg_sm_rt[932] typeset -li MAX_RSCT_WAIT
:cl_cfg_sm_rt[933] grep ^RSCT_START_WAIT /etc/environment
:cl_cfg_sm_rt[933] eval
:cl_cfg_sm_rt[934] (( 0 < 10 ))
:cl_cfg_sm_rt[934] RSCT_START_WAIT=10
:cl_cfg_sm_rt[935] (( 10 > 60 ))
:cl_cfg_sm_rt[937] (( retries=0))
:cl_cfg_sm_rt[937] (( 0 < 1))
:cl_cfg_sm_rt[939] lsrsrc IBM.PeerNode
:cl_cfg_sm_rt[939] 1>> /var/hacmp/log/clutils.log 2>& 1
:cl_cfg_sm_rt[941] break
:cl_cfg_sm_rt[947] (( 0 >= 1 ))
:cl_cfg_sm_rt[954] : Configure RSCT in accordance with the specified or defaulted policies
:cl_cfg_sm_rt[955] : for Split
:cl_cfg_sm_rt[965] CT_MANAGEMENT_SCOPE=2
:cl_cfg_sm_rt[965] export CT_MANAGEMENT_SCOPE
:cl_cfg_sm_rt[966] lsrsrc -t -c -x IBM.PeerNode OpQuorumTieBreaker
:cl_cfg_sm_rt[966] Current_TB='"Success" '
:cl_cfg_sm_rt[967] Current_TB='"Success'
:cl_cfg_sm_rt[968] Current_TB=Success
:cl_cfg_sm_rt[969] [[ None == None ]]
:cl_cfg_sm_rt[971] [[ Success == Success ]]
:cl_cfg_sm_rt[973] chrsrc -c IBM.PeerNode OpQuorumTieBreaker=Operator
:cl_cfg_sm_rt[974] src_rc=0
:cl_cfg_sm_rt[975] (( 0 != 0 ))
:cl_cfg_sm_rt[981] (( 0 == 0 ))
:cl_cfg_sm_rt[983] chrsrc -s Name='="Success"' IBM.TieBreaker PostReserveWaitTime=30
:cl_cfg_sm_rt[984] src_rc=0
:cl_cfg_sm_rt[985] (( 0 != 0 ))
:cl_cfg_sm_rt[990] chrsrc -c IBM.PeerNode OpQuorumTieBreaker=Success
:cl_cfg_sm_rt[991] src_rc=0
:cl_cfg_sm_rt[992] (( 0 != 0 ))
:cl_cfg_sm_rt[1044] src_rc=0
:cl_cfg_sm_rt[1045] (( 0 != 0 ))
:cl_cfg_sm_rt[1053] : Configure RSCT Action
:cl_cfg_sm_rt[1055] chrsrc -c IBM.PeerNode QuorumType=4
:cl_cfg_sm_rt[1056] src_rc=0
:cl_cfg_sm_rt[1057] (( 0 != 0 ))
:cl_cfg_sm_rt[1064] chrsrc -c IBM.PeerNode CriticalMode=2
:cl_cfg_sm_rt[1065] src_rc=0
:cl_cfg_sm_rt[1066] (( 0 != 0 ))
:cl_cfg_sm_rt[1073] [[ Reboot == Reboot ]]
:cl_cfg_sm_rt[1075] chrsrc -c IBM.PeerNode CritRsrcProtMethod=1
:cl_cfg_sm_rt[1077] src_rc=0
:cl_cfg_sm_rt[1078] (( 0 != 0 ))
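With Split=None, Merge=Majority, and Action=Reboot, cl_cfg_sm_rt pushes a fixed set of RSCT peer-domain settings. The commands below are exactly the ones traced above, collected in order; OpQuorumTieBreaker is parked on Operator so the Success tie breaker's PostReserveWaitTime can be changed, then restored:

    export CT_MANAGEMENT_SCOPE=2                        # peer-domain scope
    chrsrc -c IBM.PeerNode OpQuorumTieBreaker=Operator
    chrsrc -s 'Name="Success"' IBM.TieBreaker PostReserveWaitTime=30
    chrsrc -c IBM.PeerNode OpQuorumTieBreaker=Success
    chrsrc -c IBM.PeerNode QuorumType=4
    chrsrc -c IBM.PeerNode CriticalMode=2
    chrsrc -c IBM.PeerNode CritRsrcProtMethod=1         # Reboot action: hard reset on critical-resource loss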
:cl_cfg_sm_rt[1086] : Configure RSCT Critical Resource Daemon Grace Period for cluster level.
:cl_cfg_sm_rt[1088] typeset grace_period
:cl_cfg_sm_rt[1089] clodmget -f crit_daemon_restart_grace_period HACMPcluster
:cl_cfg_sm_rt[1089] grace_period=60
:cl_cfg_sm_rt[1090] lsrsrc -c IBM.PeerNode
:cl_cfg_sm_rt[1090] LC_ALL=C
:cl_cfg_sm_rt[1090] grep CritDaemonRestartGracePeriod
:cl_cfg_sm_rt[1090] awk -F= '{print $2}'
:cl_cfg_sm_rt[1090] rsct_grace_period=' -1'
:cl_cfg_sm_rt[1091] [[ -n ' -1' ]]
:cl_cfg_sm_rt[1092] (( -1 != 60 ))
:cl_cfg_sm_rt[1093] chrsrc -c IBM.PeerNode CritDaemonRestartGracePeriod=60
:cl_cfg_sm_rt[1093] LC_ALL=C
:cl_cfg_sm_rt[1094] chrsrc_rc=0
:cl_cfg_sm_rt[1095] (( 0 != 0 ))
:cl_cfg_sm_rt[1104] : Configure RSCT Critical Resource Daemon Grace Period for node level.
:cl_cfg_sm_rt[1106] typeset node_grace_period
:cl_cfg_sm_rt[1107] typeset node_list
:cl_cfg_sm_rt[1108] typeset rsct_node_grace_period
:cl_cfg_sm_rt[1110] : Get the CAA active nodes list
:cl_cfg_sm_rt[1112] lscluster -m
:cl_cfg_sm_rt[1112] grep -p 'State of node: UP'
:cl_cfg_sm_rt[1112] cut -f2 -d:
:cl_cfg_sm_rt[1112] grep -w 'Node name:'
:cl_cfg_sm_rt[1112] node_list=$' epprda\n epprds'
:cl_cfg_sm_rt[1115] clodmget -n -q object='COMMUNICATION_PATH and value=epprda' -f name HACMPnode
:cl_cfg_sm_rt[1115] host_name=epprda
:cl_cfg_sm_rt[1116] clodmget -n -q object='CRIT_DAEMON_RESTART_GRACE_PERIOD and name=epprda' -f value HACMPnode
:cl_cfg_sm_rt[1116] node_grace_period=''
:cl_cfg_sm_rt[1117] [[ -n '' ]]
:cl_cfg_sm_rt[1115] clodmget -n -q object='COMMUNICATION_PATH and value=epprds' -f name HACMPnode
:cl_cfg_sm_rt[1115] host_name=epprds
:cl_cfg_sm_rt[1116] clodmget -n -q object='CRIT_DAEMON_RESTART_GRACE_PERIOD and name=epprds' -f value HACMPnode
:cl_cfg_sm_rt[1116] node_grace_period=''
:cl_cfg_sm_rt[1117] [[ -n '' ]]
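The node-level pass mirrors the cluster-level one: each CAA host that is UP is mapped back to its PowerHA node name through COMMUNICATION_PATH, and a per-node CRIT_DAEMON_RESTART_GRACE_PERIOD, if set, would override the cluster value. In this trace neither node has one, so nothing changes. A sketch of the loop (the per-node chrsrc call is an assumption, based on the per-resource attribute visible in the lsrsrc output below):

    lscluster -m | grep -p 'State of node: UP' | grep -w 'Node name:' | cut -f2 -d: |
    while read host
    do
        name=$(clodmget -n -q "object=COMMUNICATION_PATH and value=$host" -f name HACMPnode)
        gp=$(clodmget -n -q "object=CRIT_DAEMON_RESTART_GRACE_PERIOD and name=$name" -f value HACMPnode)
        [[ -n $gp ]] && chrsrc -s "Name=\"$name\"" IBM.PeerNode CritDaemonRestartGracePeriod=$gp
    done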
:cl_cfg_sm_rt[1134] : Success exit. Display the CAA and RSCT configuration
:cl_cfg_sm_rt[1136] clctrl -tune -a
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).communication_mode = u
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).config_timeout = 240
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).deadman_mode = a
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).dr_enabled = 1
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).link_timeout = 30000
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).local_merge_policy = h
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).network_fdt = 20000
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).no_if_traffic_monitor = 0
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).node_down_delay = 10000
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).node_timeout = 30000
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).packet_ttl = 32
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).remote_hb_factor = 1
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).repos_mode = e
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).site_merge_policy = h
:cl_cfg_sm_rt[1137] lscluster -m
Calling node query for all nodes...
Node query number of nodes examined: 2
Node name: epprda
Cluster shorthand id for node: 1
UUID for node: f42873b8-9ee2-11ed-8018-fae6134ea920
State of node: UP NODE_LOCAL
Reason: NONE
Smoothed rtt to node: 0
Mean Deviation in network rtt to node: 0
Number of clusters node is a member in: 1
CLUSTER NAME     SHID   UUID
epprda_cluster   0      f43c91c2-9ee2-11ed-8018-fae6134ea920
SITE NAME        SHID   UUID
LOCAL            1      51735173-5173-5173-5173-517351735173
Points of contact for node: 0
----------------------------------------------------------------------------
Node name: epprds
Cluster shorthand id for node: 2
UUID for node: f42873fe-9ee2-11ed-8018-fae6134ea920
State of node: UP
Reason: NONE
Smoothed rtt to node: 154
Mean Deviation in network rtt to node: 225
Number of clusters node is a member in: 1
CLUSTER NAME     SHID   UUID
epprda_cluster   0      f43c91c2-9ee2-11ed-8018-fae6134ea920
SITE NAME        SHID   UUID
LOCAL            1      51735173-5173-5173-5173-517351735173
Points of contact for node: 1
-----------------------------------------------------------------------
Interface     State  Protocol  Status  SRC_IP->DST_IP
-----------------------------------------------------------------------
tcpsock->02   UP     IPv4      none    61.81.244.134->61.81.244.123
:cl_cfg_sm_rt[1138] lsrsrc -x -A b IBM.PeerNode
resource 1:
Name = "epprds"
NodeList = {2}
RSCTVersion = "3.2.6.4"
ClassVersions = {}
CritRsrcProtMethod = 0
IsQuorumNode = 1
IsPreferredGSGL = 1
NodeUUID = "f42873fe-9ee2-11ed-8018-fae6134ea920"
HostName = "epprds"
TBPriority = 0
CritDaemonRestartGracePeriod = -1
ActivePeerDomain = "epprda_cluster"
NodeNameList = {"epprds"}
OpState = 1
ConfigChanged = 1
CritRsrcActive = 0
OpUsabilityState = 1
MaintenanceState = 0
resource 2:
Name = "epprda"
NodeList = {1}
RSCTVersion = "3.2.6.4"
ClassVersions = {}
CritRsrcProtMethod = 0
IsQuorumNode = 1
IsPreferredGSGL = 1
NodeUUID = "f42873b8-9ee2-11ed-8018-fae6134ea920"
HostName = "epprda"
TBPriority = 0
CritDaemonRestartGracePeriod = -1
ActivePeerDomain = "epprda_cluster"
NodeNameList = {"epprda"}
OpState = 1
ConfigChanged = 1
CritRsrcActive = 0
OpUsabilityState = 1
MaintenanceState = 0
:cl_cfg_sm_rt[1139] lsrsrc -x -c -A b IBM.PeerNode
resource 1:
CommittedRSCTVersion = "3.2.2.0"
ActiveVersionChanging = 0
OpQuorumOverride = 0
CritRsrcProtMethod = 1
OpQuorumTieBreaker = "Success"
QuorumType = 4
QuorumGroupName = ""
Fanout = 32
OpFenceGroup = ""
NodeCleanupCommand = ""
NodeCleanupCriteria = ""
QuorumLessStartupTimeout = 120
CriticalMode = 2
NotifyQuorumChangedCommand = ""
NamePolicy = 1
LiveUpdateOptions = ""
QuorumNotificationRespWaitTime = 0
MaintenanceModeConfig = ""
CritDaemonRestartGracePeriod = 60
:cl_cfg_sm_rt[1141] return 0
:cl_cfg_sm_rt[1] on_exit 0
:node_up[441] : exit status of cl_cfg_sm_rt is 0
:node_up[498] : Enable NFS crossmounts during manual start
:node_up[500] [[ -n false ]]
:node_up[500] [[ false == true ]]
:node_up[607] : When RG dependencies are not configured we call node_up_local/remote,
:node_up[608] : followed by process_resources to process any remaining groups
:node_up[610] [[ TRUE == FALSE ]]
:node_up[657] [[ epprda == epprda ]]
:node_up[660] : Perform any deferred TCP daemon startup, if necessary,
:node_up[661] : along with any necessary start up of iSCSI devices.
:node_up[663] cl_telinit
:cl_telinit[178] version=%I%
:cl_telinit[182] TELINIT_FILE=/usr/es/sbin/cluster/.telinit
:cl_telinit[183] USE_TELINIT_FILE=/usr/es/sbin/cluster/.use_telinit
:cl_telinit[185] [[ -f /usr/es/sbin/cluster/.use_telinit ]]
:cl_telinit[189] USE_TELINIT=0
:cl_telinit[198] [[ '' == -boot ]]
:cl_telinit[236] cl_lsitab clinit
:cl_telinit[236] 1> /dev/null 2>& 1
:cl_telinit[239] : telinit a disabled
:cl_telinit[241] return 0
:node_up[664] : exit status of cl_telinit is: 0
:node_up[667] return 0
Jan 28 2023 17:10:31 EVENT COMPLETED: node_up epprda 0
|2023-01-28T17:10:31|28697|EVENT COMPLETED: node_up epprda 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:10:31.539193
+ echo '|2023-01-28T17:10:31.539193|INFO: node_up|epprda|0'
+ 1>> /var/hacmp/availability/clavailability.log
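cl_telinit, called at the end of node_up above, handles TCP/IP daemons whose startup was deferred until cluster services came up: it acts only when the clinit entry exists in /etc/inittab. On this node cl_lsitab clinit fails, so the traced path is the disabled one. A minimal sketch (the telinit invocation in the enabled branch is an assumption implied by the script's own ': telinit a disabled' comment):

    if cl_lsitab clinit > /dev/null 2>&1
    then
        telinit a        # assumed enabled branch: run the deferred 'a' inittab entries
    else
        : telinit a disabled
        return 0
    fi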
Jan 28 2023 17:10:33 EVENT START: rg_move_fence epprda 1
|2023-01-28T17:10:33|28698|EVENT START: rg_move_fence epprda 1|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:10:33.744597
+ echo '|2023-01-28T17:10:33.744597|INFO: rg_move_fence|epprd_rg|epprda|1'
+ 1>> /var/hacmp/availability/clavailability.log
:rg_move_fence[62] [[ high == high ]]
:rg_move_fence[62] version=1.11
:rg_move_fence[63] NODENAME=epprda
:rg_move_fence[63] export NODENAME
:rg_move_fence[65] set -u
:rg_move_fence[67] [ 2 != 2 ]
:rg_move_fence[73] set +u
:rg_move_fence[75] [[ -z TRUE ]]
:rg_move_fence[80] [[ TRUE == TRUE ]]
:rg_move_fence[82] LOCAL_NODENAME=epprda
:rg_move_fence[83] odmget -qid=1 HACMPgroup
:rg_move_fence[83] egrep 'group ='
:rg_move_fence[83] awk '{print $3}'
:rg_move_fence[83] eval RGNAME='"epprd_rg"'
:rg_move_fence[1] RGNAME=epprd_rg
+epprd_rg:rg_move_fence[84] GROUPNAME=epprd_rg
+epprd_rg:rg_move_fence[85] group_state='$RESGRP_epprd_rg_epprda'
+epprd_rg:rg_move_fence[86] set +u
+epprd_rg:rg_move_fence[87] eval print '$RESGRP_epprd_rg_epprda'
+epprd_rg:rg_move_fence[1] print
+epprd_rg:rg_move_fence[87] RG_MOVE_ONLINE=''
+epprd_rg:rg_move_fence[87] export RG_MOVE_ONLINE
+epprd_rg:rg_move_fence[88] set -u
+epprd_rg:rg_move_fence[89] RG_MOVE_ONLINE=TMP_ERROR
+epprd_rg:rg_move_fence[91] set -a
+epprd_rg:rg_move_fence[92] clsetenvgrp epprda rg_move epprd_rg ''
:clsetenvgrp[+49] [[ high = high ]]
:clsetenvgrp[+49] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clsetenvgrp.sh 1$
:clsetenvgrp[+51] usingVer=clSetenvgrp
:clsetenvgrp[+56] clSetenvgrp epprda rg_move epprd_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+57] exit 0
+epprd_rg:rg_move_fence[92] clsetenvgrp_output=FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
+epprd_rg:rg_move_fence[93] RC=0
+epprd_rg:rg_move_fence[94] eval FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
+epprd_rg:rg_move_fence[1] FORCEDOWN_GROUPS=''
+epprd_rg:rg_move_fence[2] RESOURCE_GROUPS=''
+epprd_rg:rg_move_fence[3] HOMELESS_GROUPS=''
+epprd_rg:rg_move_fence[4] HOMELESS_FOLLOWER_GROUPS=''
+epprd_rg:rg_move_fence[5] ERRSTATE_GROUPS=''
+epprd_rg:rg_move_fence[6] PRINCIPAL_ACTIONS=''
+epprd_rg:rg_move_fence[7] ASSOCIATE_ACTIONS=''
+epprd_rg:rg_move_fence[8] AUXILLIARY_ACTIONS=''
+epprd_rg:rg_move_fence[8] SIBLING_GROUPS=''
+epprd_rg:rg_move_fence[9] SIBLING_NODES_BY_GROUP=''
+epprd_rg:rg_move_fence[10] SIBLING_ACQUIRING_GROUPS=''
+epprd_rg:rg_move_fence[11] SIBLING_ACQUIRING_NODES_BY_GROUP=''
+epprd_rg:rg_move_fence[12] SIBLING_RELEASING_GROUPS=''
+epprd_rg:rg_move_fence[13] SIBLING_RELEASING_NODES_BY_GROUP=''
+epprd_rg:rg_move_fence[95] set +a
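# Sketch of the export-on-assignment pattern just traced: clSetenvgrp emits
# NAME="value" lines, and running eval on them between "set -a" and "set +a"
# assigns and exports every variable in one step.
set -a                                            # auto-export assignments
clsetenvgrp_output=$(clsetenvgrp epprda rg_move epprd_rg '')
eval "$clsetenvgrp_output"                        # FORCEDOWN_GROUPS, SIBLING_*, ...
set +a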
+epprd_rg:rg_move_fence[96] [ 0 -ne 0 ]
+epprd_rg:rg_move_fence[103] process_resources FENCE
:rg_move_fence[3318] version=1.169
:rg_move_fence[3321] STATUS=0
:rg_move_fence[3322] sddsrv_off=FALSE
:rg_move_fence[3324] true
:rg_move_fence[3326] : call rgpa, and it will tell us what to do next
:rg_move_fence[3328] set -a
:rg_move_fence[3329] clRGPA FENCE
:clRGPA[+47] [[ high = high ]]
:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
:clRGPA[+49] usingVer=clrgpa
:clRGPA[+54] clrgpa FENCE
2023-01-28T17:10:33.848077 clrgpa
:clRGPA[+55] exit 0
:rg_move_fence[3329] eval JOB_TYPE=NONE
:rg_move_fence[1] JOB_TYPE=NONE
:rg_move_fence[3330] RC=0
:rg_move_fence[3331] set +a
:rg_move_fence[3333] (( 0 != 0 ))
:rg_move_fence[3342] RESOURCE_GROUPS=''
:rg_move_fence[3343] GROUPNAME=''
:rg_move_fence[3343] export GROUPNAME
:rg_move_fence[3353] IS_SERVICE_START=1
:rg_move_fence[3354] IS_SERVICE_STOP=1
:rg_move_fence[3360] [[ NONE == RELEASE ]]
:rg_move_fence[3360] [[ NONE == ONLINE ]]
:rg_move_fence[3729] break
:rg_move_fence[3740] : If sddsrv was turned off above, turn it back on again
:rg_move_fence[3742] [[ FALSE == TRUE ]]
:rg_move_fence[3747] exit 0
+epprd_rg:rg_move_fence[104] : exit status of process_resources FENCE is: 0
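# Sketch of the process_resources driver loop seen above and repeated below:
# each pass asks the resource group policy agent (clrgpa, via clRGPA) for
# the next job, evals the returned assignments, and dispatches on JOB_TYPE
# until NONE comes back. Illustrative skeleton, not the shipped function.
while true ; do
    set -a
    eval $(clRGPA)                   # sets JOB_TYPE, ACTION, RESOURCE_GROUPS, ...
    set +a
    [[ $JOB_TYPE == NONE ]] && break
    : dispatch the handler for $JOB_TYPE here
done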
+epprd_rg:rg_move_fence[107] [[ TRUE == TRUE ]]
+epprd_rg:rg_move_fence[109] export EVENT_TYPE
+epprd_rg:rg_move_fence[110] echo ACQUIRE_PRIMARY
ACQUIRE_PRIMARY
+epprd_rg:rg_move_fence[111] [[ -n '' ]]
+epprd_rg:rg_move_fence[141] exit 0
Jan 28 2023 17:10:33 EVENT COMPLETED: rg_move_fence epprda 1 0
|2023-01-28T17:10:33|28698|EVENT COMPLETED: rg_move_fence epprda 1 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:10:33.940449
+ echo '|2023-01-28T17:10:33.940449|INFO: rg_move_fence|epprd_rg|epprda|1|0'
+ 1>> /var/hacmp/availability/clavailability.log
Jan 28 2023 17:10:34 EVENT START: rg_move_acquire epprda 1
|2023-01-28T17:10:34|28698|EVENT START: rg_move_acquire epprda 1|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:10:34.134605
+ echo '|2023-01-28T17:10:34.134605|INFO: rg_move_acquire|epprd_rg|epprda|1'
+ 1>> /var/hacmp/availability/clavailability.log
:rg_move_acquire[+54] [[ high == high ]]
:rg_move_acquire[+54] version=1.9.1.7
:rg_move_acquire[+57] set -u
:rg_move_acquire[+59] [ 2 != 2 ]
:rg_move_acquire[+65] set +u
:rg_move_acquire[+67] :rg_move_acquire[+67] clodmget -n -q id=1 -f group HACMPgroup
RG=epprd_rg
:rg_move_acquire[+68] export RG
:rg_move_acquire[+70] [[ ACQUIRE_PRIMARY == ACQUIRE_PRIMARY ]]
:rg_move_acquire[+75] typeset -i anhp_ret=0
:rg_move_acquire[+76] typeset -i scsi_ret=0
:rg_move_acquire[+78] clodmget -n -q policy = anhp -f value HACMPsplitmerge
:rg_move_acquire[+78] typeset ANHP_ENABLED=
:rg_move_acquire[+78] [[ == Yes ]]
:rg_move_acquire[+87] clodmget -n -q policy = scsi -f value HACMPsplitmerge
:rg_move_acquire[+87] typeset SCSIPR_ENABLED=
:rg_move_acquire[+87] [[ == Yes ]]
:rg_move_acquire[+106] (( 0 == 1 && 0 == 1 ))
:rg_move_acquire[+109] (( 0 == 1 && 0 == 0 ))
:rg_move_acquire[+112] (( 0 == 1 && 0 == 0 ))
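# Sketch of the split/merge policy gate above: both checks read a policy
# value from the HACMPsplitmerge ODM class and only take their helper paths
# when it is "Yes"; neither policy is set on this cluster, so both are
# skipped. Illustrative reduction:
ANHP_ENABLED=$(clodmget -n -q 'policy = anhp' -f value HACMPsplitmerge)
SCSIPR_ENABLED=$(clodmget -n -q 'policy = scsi' -f value HACMPsplitmerge)
[[ $ANHP_ENABLED == Yes ]]   && : anhp quorum handling would run here
[[ $SCSIPR_ENABLED == Yes ]] && : SCSI persistent-reserve handling would run here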
:rg_move_acquire[+118] clcallev rg_move epprda 1 ACQUIRE
Jan 28 2023 17:10:34 EVENT START: rg_move epprda 1 ACQUIRE
|2023-01-28T17:10:34|28698|EVENT START: rg_move epprda 1 ACQUIRE|
:clevlog[amlog_trace:318] clcycle clavailability.log
:clevlog[amlog_trace:318] 1> /dev/null 2>& 1
:clevlog[amlog_trace:319] cltime
:clevlog[amlog_trace:319] DATE=2023-01-28T17:10:34.264066
:clevlog[amlog_trace:320] echo '|2023-01-28T17:10:34.264066|INFO: rg_move|epprd_rg|epprda|1|ACQUIRE'
:clevlog[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
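# Sketch of the amlog_trace pattern that recurs throughout this log: rotate
# clavailability.log if needed (clcycle), timestamp with cltime, and append
# one pipe-delimited record per event transition.
clcycle clavailability.log > /dev/null 2>&1
DATE=$(cltime)
echo "|$DATE|INFO: rg_move|epprd_rg|epprda|1|ACQUIRE" \
    >> /var/hacmp/availability/clavailability.log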
:get_local_nodename[48] version=1.2.1.28
:get_local_nodename[52] : cllsclstr -N will return the local node if not configured in HACMPcluster
:get_local_nodename[54] ODMDIR=/etc/es/objrepos
:get_local_nodename[54] export ODMDIR
:get_local_nodename[55] nodename=''
:get_local_nodename[55] typeset nodename
:get_local_nodename[56] cllsclstr -N
:get_local_nodename[56] nodename=epprda
:get_local_nodename[57] rc=0
:get_local_nodename[57] typeset -i rc
:get_local_nodename[58] (( 0 != 0 ))
:get_local_nodename[61] : If the node name in HACMPcluster matches a configured node, we are done.
:get_local_nodename[63] clnodename
:get_local_nodename[63] grep -w epprda
:get_local_nodename[63] [[ -n epprda ]]
:get_local_nodename[65] print -- epprda
:get_local_nodename[66] exit 0
:rg_move[76] version=%I%
:rg_move[86] STATUS=0
:rg_move[88] [[ ! -n '' ]]
:rg_move[90] EMULATE=REAL
:rg_move[96] set -u
:rg_move[98] NODENAME=epprda
:rg_move[98] export NODENAME
:rg_move[99] RGID=1
:rg_move[100] (( 3 == 3 ))
:rg_move[102] ACTION=ACQUIRE
:rg_move[108] : serial number for this event is 28698
:rg_move[112] RG_UP_POSTEVENT_ON_NODE=epprda
:rg_move[112] export RG_UP_POSTEVENT_ON_NODE
:rg_move[116] clodmget -qid=1 -f group -n HACMPgroup
:rg_move[116] eval RGNAME=epprd_rg
:rg_move[1] RGNAME=epprd_rg
:rg_move[118] UPDATESTATD=0
:rg_move[119] export UPDATESTATD
:rg_move[123] RG_MOVE_EVENT=true
:rg_move[123] export RG_MOVE_EVENT
:rg_move[128] group_state='$RESGRP_epprd_rg_epprda'
:rg_move[129] set +u
:rg_move[130] eval print '$RESGRP_epprd_rg_epprda'
:rg_move[1] print
:rg_move[130] RG_MOVE_ONLINE=''
:rg_move[130] export RG_MOVE_ONLINE
:rg_move[131] set -u
:rg_move[132] RG_MOVE_ONLINE=TMP_ERROR
:rg_move[139] rm -f /tmp/.NFSSTOPPED
:rg_move[140] rm -f /tmp/.RPCLOCKDSTOPPED
:rg_move[147] set -a
:rg_move[148] clsetenvgrp epprda rg_move epprd_rg
:clsetenvgrp[+49] [[ high = high ]]
:clsetenvgrp[+49] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clsetenvgrp.sh 1$
:clsetenvgrp[+51] usingVer=clSetenvgrp
:clsetenvgrp[+56] clSetenvgrp epprda rg_move epprd_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+57] exit 0
:rg_move[148] clsetenvgrp_output=FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
:rg_move[149] RC=0
:rg_move[150] eval FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
:rg_move[1] FORCEDOWN_GROUPS=''
:rg_move[2] RESOURCE_GROUPS=''
:rg_move[3] HOMELESS_GROUPS=''
:rg_move[4] HOMELESS_FOLLOWER_GROUPS=''
:rg_move[5] ERRSTATE_GROUPS=''
:rg_move[6] PRINCIPAL_ACTIONS=''
:rg_move[7] ASSOCIATE_ACTIONS=''
:rg_move[8] AUXILLIARY_ACTIONS=''
:rg_move[8] SIBLING_GROUPS=''
:rg_move[9] SIBLING_NODES_BY_GROUP=''
:rg_move[10] SIBLING_ACQUIRING_GROUPS=''
:rg_move[11] SIBLING_ACQUIRING_NODES_BY_GROUP=''
:rg_move[12] SIBLING_RELEASING_GROUPS=''
:rg_move[13] SIBLING_RELEASING_NODES_BY_GROUP=''
:rg_move[151] set +a
:rg_move[155] (( 0 != 0 ))
:rg_move[155] [[ -z epprd_rg ]]
:rg_move[164] [[ -z TRUE ]]
:rg_move[241] AM_SYNC_CALLED_BY=RG_MOVE
:rg_move[241] export AM_SYNC_CALLED_BY
:rg_move[242] process_resources
:process_resources[3318] version=1.169
:process_resources[3321] STATUS=0
:process_resources[3322] sddsrv_off=FALSE
:process_resources[3324] true
:process_resources[3326] : call rgpa, and it will tell us what to do next
:process_resources[3328] set -a
:process_resources[3329] clRGPA
:clRGPA[+47] [[ high = high ]]
:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
:clRGPA[+49] usingVer=clrgpa
:clRGPA[+54] clrgpa
2023-01-28T17:10:34.384412 clrgpa
:clRGPA[+55] exit 0
:process_resources[3329] eval JOB_TYPE=ACQUIRE RESOURCE_GROUPS='"epprd_rg"' PRINCIPAL_ACTION='"ACQUIRE"' AUXILLIARY_ACTION='"NONE"'
:process_resources[1] JOB_TYPE=ACQUIRE
:process_resources[1] RESOURCE_GROUPS=epprd_rg
:process_resources[1] PRINCIPAL_ACTION=ACQUIRE
:process_resources[1] AUXILLIARY_ACTION=NONE
:process_resources[3330] RC=0
:process_resources[3331] set +a
:process_resources[3333] (( 0 != 0 ))
:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ ACQUIRE == RELEASE ]]
+epprd_rg:process_resources[3360] [[ ACQUIRE == ONLINE ]]
+epprd_rg:process_resources[3652] set_resource_group_state ACQUIRING
+epprd_rg:process_resources[set_resource_group_state:82] PS4_FUNC=set_resource_group_state
+epprd_rg:process_resources[set_resource_group_state:82] typeset PS4_FUNC
+epprd_rg:process_resources[set_resource_group_state:83] [[ high == high ]]
+epprd_rg:process_resources[set_resource_group_state:83] set -x
+epprd_rg:process_resources[set_resource_group_state:84] STAT=0
+epprd_rg:process_resources[set_resource_group_state:85] new_status=ACQUIRING
+epprd_rg:process_resources[set_resource_group_state:89] export GROUPNAME
+epprd_rg:process_resources[set_resource_group_state:90] [[ ACQUIRING != DOWN ]]
+epprd_rg:process_resources[set_resource_group_state:92] clchdaemons -d clstrmgr_scripts -t resource_locator -n epprda -o epprd_rg -v ACQUIRING
+epprd_rg:process_resources[set_resource_group_state:100] : Resource Manager Updates
+epprd_rg:process_resources[set_resource_group_state:105] amlog_trace '' 'acquire|epprd_rg|epprda'
+epprd_rg:process_resources[amlog_trace:318] clcycle clavailability.log
+epprd_rg:process_resources[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:process_resources[amlog_trace:319] cltime
+epprd_rg:process_resources[amlog_trace:319] DATE=2023-01-28T17:10:34.418340
+epprd_rg:process_resources[amlog_trace:320] echo '|2023-01-28T17:10:34.418340|INFO: acquire|epprd_rg|epprda'
+epprd_rg:process_resources[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:process_resources[set_resource_group_state:106] cl_RMupdate acquiring epprd_rg process_resources
2023-01-28T17:10:34.441217
2023-01-28T17:10:34.445821
+epprd_rg:process_resources[set_resource_group_state:153] return 0
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T17:10:34.458184 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=WPAR ACTION=ACQUIRE RESOURCE_GROUPS='"epprd_rg' '"'
+epprd_rg:process_resources[1] JOB_TYPE=WPAR
+epprd_rg:process_resources[1] ACTION=ACQUIRE
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ WPAR == RELEASE ]]
+epprd_rg:process_resources[3360] [[ WPAR == ONLINE ]]
+epprd_rg:process_resources[3492] process_wpars ACQUIRE
+epprd_rg:process_resources[process_wpars:3265] PS4_FUNC=process_wpars
+epprd_rg:process_resources[process_wpars:3265] typeset PS4_FUNC
+epprd_rg:process_resources[process_wpars:3266] [[ high == high ]]
+epprd_rg:process_resources[process_wpars:3266] set -x
+epprd_rg:process_resources[process_wpars:3267] STAT=0
+epprd_rg:process_resources[process_wpars:3268] action=ACQUIRE
+epprd_rg:process_resources[process_wpars:3268] typeset action
+epprd_rg:process_resources[process_wpars:3272] export GROUPNAME
+epprd_rg:process_resources[process_wpars:3275] clstart_wpar
+epprd_rg:clstart_wpar[180] version=1.12.1.1
+epprd_rg:clstart_wpar[184] [[ rg_move == reconfig_resource_acquire ]]
+epprd_rg:clstart_wpar[184] [[ ACQUIRE_PRIMARY == reconfig_resource_acquire ]]
+epprd_rg:clstart_wpar[193] clodmget '-qname = WPAR_NAME' -f group -n HACMPresource
+epprd_rg:clstart_wpar[193] [[ -z '' ]]
+epprd_rg:clstart_wpar[193] exit 0
+epprd_rg:process_resources[process_wpars:3276] RC=0
+epprd_rg:process_resources[process_wpars:3285] (( 0 != 0 && 0 != 11 ))
+epprd_rg:process_resources[process_wpars:3294] return 0
+epprd_rg:process_resources[3493] RC=0
+epprd_rg:process_resources[3495] [[ ACQUIRE == RELEASE ]]
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T17:10:34.489575 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=SERVICE_LABELS ACTION=ACQUIRE IP_LABELS='"epprd"' RESOURCE_GROUPS='"epprd_rg' '"' COMMUNICATION_LINKS='""'
+epprd_rg:process_resources[1] JOB_TYPE=SERVICE_LABELS
+epprd_rg:process_resources[1] ACTION=ACQUIRE
+epprd_rg:process_resources[1] IP_LABELS=epprd
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[1] COMMUNICATION_LINKS=''
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ SERVICE_LABELS == RELEASE ]]
+epprd_rg:process_resources[3360] [[ SERVICE_LABELS == ONLINE ]]
+epprd_rg:process_resources[3407] [[ ACQUIRE == ACQUIRE ]]
+epprd_rg:process_resources[3409] acquire_service_labels
+epprd_rg:process_resources[acquire_service_labels:3083] PS4_FUNC=acquire_service_labels
+epprd_rg:process_resources[acquire_service_labels:3083] typeset PS4_FUNC
+epprd_rg:process_resources[acquire_service_labels:3084] [[ high == high ]]
+epprd_rg:process_resources[acquire_service_labels:3084] set -x
+epprd_rg:process_resources[acquire_service_labels:3085] STAT=0
+epprd_rg:process_resources[acquire_service_labels:3086] clcallev acquire_service_addr
Jan 28 2023 17:10:34 EVENT START: acquire_service_addr
|2023-01-28T17:10:34|28698|EVENT START: acquire_service_addr |
+epprd_rg:acquire_service_addr[416] version=1.74.1.5
+epprd_rg:acquire_service_addr[423] [[ SERVICE_LABELS != 0 ]]
+epprd_rg:acquire_service_addr[423] [[ SERVICE_LABELS != GROUP ]]
+epprd_rg:acquire_service_addr[424] PROC_RES=true
+epprd_rg:acquire_service_addr[440] saveNSORDER=UNDEFINED
+epprd_rg:acquire_service_addr[441] NSORDER=local
+epprd_rg:acquire_service_addr[442] export NSORDER
+epprd_rg:acquire_service_addr[445] cl_RMupdate resource_acquiring All_service_addrs acquire_service_addr
2023-01-28T17:10:34.568976
2023-01-28T17:10:34.573398
+epprd_rg:acquire_service_addr[452] export GROUPNAME
+epprd_rg:acquire_service_addr[458] [[ true == true ]]
+epprd_rg:acquire_service_addr[459] get_list_head epprd
+epprd_rg:acquire_service_addr[459] read SERVICELABELS
+epprd_rg:acquire_service_addr[460] get_list_tail epprd
+epprd_rg:acquire_service_addr[460] read IP_LABELS
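# Sketch of the list split traced above: get_list_head emits the first
# element of the label list and get_list_tail the remainder; with a single
# service label "epprd" the tail is empty. "cmd | read VAR" keeps the
# variable in the current shell under ksh.
get_list_head epprd | read SERVICELABELS     # -> epprd
get_list_tail epprd | read IP_LABELS         # -> empty here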
+epprd_rg:acquire_service_addr[471] clgetif -a epprd
+epprd_rg:acquire_service_addr[471] 2> /dev/null
+epprd_rg:acquire_service_addr[472] (( 3 != 0 ))
+epprd_rg:acquire_service_addr[477] cllsif -J '~' -Sn epprd
+epprd_rg:acquire_service_addr[477] uniq
+epprd_rg:acquire_service_addr[477] cut -d~ -f3
+epprd_rg:acquire_service_addr[477] NETWORK=net_ether_01
+epprd_rg:acquire_service_addr[478] cllsif -J '~' -Si epprda
+epprd_rg:acquire_service_addr[478] sort
+epprd_rg:acquire_service_addr[478] awk -F~ -v NET=net_ether_01 '{if ($2 == "boot" && $3 == NET) print $1}'
+epprd_rg:acquire_service_addr[478] boot_list=epprda
+epprd_rg:acquire_service_addr[480] [[ -z epprda ]]
+epprd_rg:acquire_service_addr[492] best_boot_addr net_ether_01 epprda
+epprd_rg:acquire_service_addr[best_boot_addr:106] NETWORK=net_ether_01
+epprd_rg:acquire_service_addr[best_boot_addr:106] typeset NETWORK
+epprd_rg:acquire_service_addr[best_boot_addr:107] shift
+epprd_rg:acquire_service_addr[best_boot_addr:108] candidate_boots=epprda
+epprd_rg:acquire_service_addr[best_boot_addr:108] typeset candidate_boots
+epprd_rg:acquire_service_addr[best_boot_addr:112] echo epprda
+epprd_rg:acquire_service_addr[best_boot_addr:112] wc -l
+epprd_rg:acquire_service_addr[best_boot_addr:112] tr ' ' '\n'
+epprd_rg:acquire_service_addr[best_boot_addr:112] num_candidates=' 1'
+epprd_rg:acquire_service_addr[best_boot_addr:112] typeset -li num_candidates
+epprd_rg:acquire_service_addr[best_boot_addr:113] (( 1 == 1 ))
+epprd_rg:acquire_service_addr[best_boot_addr:114] echo epprda
+epprd_rg:acquire_service_addr[best_boot_addr:115] return
+epprd_rg:acquire_service_addr[492] boot_addr=epprda
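# Sketch of best_boot_addr as traced: the candidate list is rewritten one
# entry per line and counted; with exactly one boot address it is chosen
# outright, and the multi-candidate scoring further down in the function
# never runs here.
best_boot_addr_sketch()
{
    candidate_boots=$*
    num_candidates=$(echo $candidate_boots | tr ' ' '\n' | wc -l)
    if (( num_candidates == 1 )) ; then
        echo $candidate_boots         # epprda wins by default here
        return
    fi
    : multi-candidate scoring would follow
}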
+epprd_rg:acquire_service_addr[493] (( 0 != 0 ))
+epprd_rg:acquire_service_addr[505] clgetif -a epprda
+epprd_rg:acquire_service_addr[505] cut -f1
+epprd_rg:acquire_service_addr[505] 2> /dev/null
+epprd_rg:acquire_service_addr[505] INTERFACE='en0 '
+epprd_rg:acquire_service_addr[507] cllsif -J '~' -Sn epprda
+epprd_rg:acquire_service_addr[507] cut -f7,9 -d~
+epprd_rg:acquire_service_addr[508] read boot_dot_addr INTERFACE
+epprd_rg:acquire_service_addr[508] IFS='~'
+epprd_rg:acquire_service_addr[510] [[ -z en0 ]]
+epprd_rg:acquire_service_addr[527] cllsif -J '~' -Sn epprd
+epprd_rg:acquire_service_addr[527] cut -f7,11,15 -d~
+epprd_rg:acquire_service_addr[527] uniq
+epprd_rg:acquire_service_addr[528] read service_dot_addr NETMASK INET_FAMILY
+epprd_rg:acquire_service_addr[528] IFS='~'
+epprd_rg:acquire_service_addr[530] [[ AF_INET == AF_INET6 ]]
+epprd_rg:acquire_service_addr[534] cl_swap_IP_address rotating acquire en0 61.81.244.156 61.81.244.134 255.255.255.0
+epprd_rg:cl_swap_IP_address[462] version=1.9.14.8
+epprd_rg:cl_swap_IP_address[464] cl_get_path -S
+epprd_rg:cl_swap_IP_address[464] OP_SEP='~'
+epprd_rg:cl_swap_IP_address[465] LC_ALL=C
+epprd_rg:cl_swap_IP_address[465] export LC_ALL
+epprd_rg:cl_swap_IP_address[466] RESTORE_ROUTES=/usr/es/sbin/cluster/.restore_routes
+epprd_rg:cl_swap_IP_address[468] cl_echo 33 'Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating acquire en0 61.81.244.156 61.81.244.134 255.255.255.0' /usr/es/sbin/cluster/events/utils/cl_swap_IP_address 'rotating acquire en0 61.81.244.156 61.81.244.134 255.255.255.0'
Jan 28 2023 17:10:34 Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating acquire en0 61.81.244.156 61.81.244.134 255.255.255.0
+epprd_rg:cl_swap_IP_address[470] typeset -i oslevel
+epprd_rg:cl_swap_IP_address[471] /usr/bin/sed s/-//g
+epprd_rg:cl_swap_IP_address[471] /usr/bin/oslevel -r
+epprd_rg:cl_swap_IP_address[471] oslevel=720005
+epprd_rg:cl_swap_IP_address[476] [[ 6 == 6 ]]
+epprd_rg:cl_swap_IP_address[477] [[ 6 == 7 ]]
+epprd_rg:cl_swap_IP_address[484] no -a
+epprd_rg:cl_swap_IP_address[484] awk '{ print $3 }'
+epprd_rg:cl_swap_IP_address[484] grep ipignoreredirects
+epprd_rg:cl_swap_IP_address[484] PRIOR_IPIGNORE_REDIRECTS_VALUE=0
+epprd_rg:cl_swap_IP_address[485] /usr/sbin/no -o ipignoreredirects=1
Setting ipignoreredirects to 1
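# Sketch of the ICMP-redirect guard above: the prior ipignoreredirects value
# (0 here) is captured from "no -a", the option is forced to 1 while the
# address moves, and the script restores 0 near its end (see the matching
# "no -o ipignoreredirects=0" further down). Restoring from the saved
# variable, as below, is a generalization of what this trace shows.
PRIOR=$(no -a | grep ipignoreredirects | awk '{ print $3 }')
/usr/sbin/no -o ipignoreredirects=1      # ignore redirects during the swap
: alias add, RSCT notification, and ARP flush happen here
/usr/sbin/no -o ipignoreredirects=$PRIOR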
+epprd_rg:cl_swap_IP_address[490] PROC_RES=false
+epprd_rg:cl_swap_IP_address[491] [[ SERVICE_LABELS != 0 ]]
+epprd_rg:cl_swap_IP_address[491] [[ SERVICE_LABELS != GROUP ]]
+epprd_rg:cl_swap_IP_address[492] PROC_RES=true
+epprd_rg:cl_swap_IP_address[495] set -u
+epprd_rg:cl_swap_IP_address[497] RC=0
+epprd_rg:cl_swap_IP_address[504] netstat -in
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
en0 1500 link#2 fa.e6.13.4e.a9.20 183694220 0 60659647 0 0
en0 1500 61.81.244 61.81.244.134 183694220 0 60659647 0 0
lo0 16896 link#1 33645546 0 33645546 0 0
lo0 16896 127 127.0.0.1 33645546 0 33645546 0 0
lo0 16896 ::1%1 33645546 0 33645546 0 0
+epprd_rg:cl_swap_IP_address[505] netstat -rnC
Routing tables
Destination Gateway Flags Wt Policy If Cost Config_Cost
Route tree for Protocol Family 2 (Internet):
default 61.81.244.1 UG 1 - en0 0 0
61.81.244.0 61.81.244.134 UHSb 1 - en0 0 0 =>
61.81.244/24 61.81.244.134 U 1 - en0 0 0
61.81.244.134 127.0.0.1 UGHS 1 - lo0 0 0
61.81.244.255 61.81.244.134 UHSb 1 - en0 0 0
127/8 127.0.0.1 U 1 - lo0 0 0
Route tree for Protocol Family 24 (Internet v6):
::1%1 ::1%1 UH 1 - lo0 0 0
+epprd_rg:cl_swap_IP_address[506] CASC_OR_ROT=rotating
+epprd_rg:cl_swap_IP_address[507] ACQ_OR_RLSE=acquire
+epprd_rg:cl_swap_IP_address[508] IF=en0
+epprd_rg:cl_swap_IP_address[509] ADDR=61.81.244.156
+epprd_rg:cl_swap_IP_address[510] OLD_ADDR=61.81.244.134
+epprd_rg:cl_swap_IP_address[511] NETMASK=255.255.255.0
+epprd_rg:cl_swap_IP_address[514] [[ rotating == cascading ]]
+epprd_rg:cl_swap_IP_address[525] cut -f3 -d~
+epprd_rg:cl_swap_IP_address[525] cllsif -J '~' -Sw -n 61.81.244.156
+epprd_rg:cl_swap_IP_address[525] NET=net_ether_01
+epprd_rg:cl_swap_IP_address[528] clodmget -qidentifier=61.81.244.156 -f max_aliases -n HACMPadapter
+epprd_rg:cl_swap_IP_address[528] ALIAS_FIRST=0
+epprd_rg:cl_swap_IP_address[529] grep -c -w inet
+epprd_rg:cl_swap_IP_address[529] ifconfig en0
+epprd_rg:cl_swap_IP_address[529] LC_ALL=C
+epprd_rg:cl_swap_IP_address[529] NUM_ADDRS=1
+epprd_rg:cl_swap_IP_address[530] [[ acquire == acquire ]]
+epprd_rg:cl_swap_IP_address[533] amlog_trace '' 'Aliasing Service IP|61.81.244.156'
+epprd_rg:cl_swap_IP_address[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_swap_IP_address[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_swap_IP_address[amlog_trace:319] cltime
+epprd_rg:cl_swap_IP_address[amlog_trace:319] DATE=2023-01-28T17:10:34.796215
+epprd_rg:cl_swap_IP_address[amlog_trace:320] echo '|2023-01-28T17:10:34.796215|INFO: Aliasing Service IP|61.81.244.156'
+epprd_rg:cl_swap_IP_address[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_swap_IP_address[535] cl_echo 7310 'cl_swap_IP_address: Configuring network interface en0 with aliased IP address 61.81.244.156' cl_swap_IP_address en0 61.81.244.156
Jan 28 2023 17:10:34 cl_swap_IP_address: Configuring network interface en0 with aliased IP address 61.81.244.156
+epprd_rg:cl_swap_IP_address[546] (( 1 > 1 ))
+epprd_rg:cl_swap_IP_address[550] clifconfig en0 alias 61.81.244.156 netmask 255.255.255.0 firstalias
+epprd_rg:clifconfig[117] version=1.9
+epprd_rg:clifconfig[121] set -A args en0 alias 61.81.244.156 netmask 255.255.255.0 firstalias
+epprd_rg:clifconfig[124] interface=en0
+epprd_rg:clifconfig[125] shift
+epprd_rg:clifconfig[127] [[ -n alias ]]
+epprd_rg:clifconfig[129] alias_val=1
+epprd_rg:clifconfig[167] shift
+epprd_rg:clifconfig[127] [[ -n 61.81.244.156 ]]
+epprd_rg:clifconfig[147] params=' address=61.81.244.156'
+epprd_rg:clifconfig[147] addr=61.81.244.156
+epprd_rg:clifconfig[167] shift
+epprd_rg:clifconfig[127] [[ -n netmask ]]
+epprd_rg:clifconfig[149] params=' address=61.81.244.156 netmask=255.255.255.0'
+epprd_rg:clifconfig[149] shift
+epprd_rg:clifconfig[167] shift
+epprd_rg:clifconfig[127] [[ -n firstalias ]]
+epprd_rg:clifconfig[167] shift
+epprd_rg:clifconfig[127] [[ -n '' ]]
+epprd_rg:clifconfig[174] [[ -n 1 ]]
+epprd_rg:clifconfig[174] [[ -n epprd_rg ]]
+epprd_rg:clifconfig[175] clwparname epprd_rg
+epprd_rg:clwparname[38] version=1.3.1.1
+epprd_rg:clwparname[44] clodmget '-qname = WPAR_NAME' -f group -n HACMPresource
+epprd_rg:clwparname[44] [[ -z '' ]]
+epprd_rg:clwparname[44] exit 0
+epprd_rg:clifconfig[175] WPARNAME=''
+epprd_rg:clifconfig[176] (( 0 == 0 ))
+epprd_rg:clifconfig[176] [[ -n '' ]]
+epprd_rg:clifconfig[218] belongs_to_an_active_wpar 61.81.244.156
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] clodmget '-qname = WPAR_NAME' -f group -n HACMPresource
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] [[ -z '' ]]
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] return 1
+epprd_rg:clifconfig[218] read wpar_name wpar_if wpar_netmask wpar_broadcast
+epprd_rg:clifconfig[218] IFS='~'
+epprd_rg:clifconfig[219] rc=1
+epprd_rg:clifconfig[221] [[ 1 == 0 ]]
+epprd_rg:clifconfig[275] ifconfig en0 alias 61.81.244.156 netmask 255.255.255.0 firstalias
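# The "firstalias" keyword on the ifconfig above makes 61.81.244.156 the
# primary address on en0 while the boot address 61.81.244.134 stays
# configured as a secondary alias (compare the two en0 inet lines in the
# netstat -in output further down). Equivalent standalone command:
ifconfig en0 alias 61.81.244.156 netmask 255.255.255.0 firstalias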
+epprd_rg:cl_swap_IP_address[584] hats_adapter_notify en0 -e 61.81.244.156 alias
2023-01-28T17:10:34.848377 hats_adapter_notify
2023-01-28T17:10:34.849275 hats_adapter_notify
+epprd_rg:cl_swap_IP_address[587] check_alias_status en0 61.81.244.156 acquire
+epprd_rg:cl_swap_IP_address[check_alias_status:108] CH_INTERFACE=en0
+epprd_rg:cl_swap_IP_address[check_alias_status:109] CH_ADDRESS=61.81.244.156
+epprd_rg:cl_swap_IP_address[check_alias_status:110] CH_ACQ_OR_RLSE=acquire
+epprd_rg:cl_swap_IP_address[check_alias_status:118] IF_IB=en0
+epprd_rg:cl_swap_IP_address[check_alias_status:120] echo en0
+epprd_rg:cl_swap_IP_address[check_alias_status:120] awk '{print index($0, "ib")}'
+epprd_rg:cl_swap_IP_address[check_alias_status:120] IS_IB=0
+epprd_rg:cl_swap_IP_address[check_alias_status:122] [[ 0 != 1 ]]
+epprd_rg:cl_swap_IP_address[check_alias_status:124] clifconfig en0
+epprd_rg:cl_swap_IP_address[check_alias_status:124] awk '{print $2}'
+epprd_rg:cl_swap_IP_address[check_alias_status:124] fgrep -w 61.81.244.156
+epprd_rg:clifconfig[117] version=1.9
+epprd_rg:clifconfig[121] set -A args en0
+epprd_rg:clifconfig[124] interface=en0
+epprd_rg:clifconfig[125] shift
+epprd_rg:clifconfig[127] [[ -n '' ]]
+epprd_rg:clifconfig[174] [[ -n '' ]]
+epprd_rg:clifconfig[218] belongs_to_an_active_wpar
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] clodmget '-qname = WPAR_NAME' -f group -n HACMPresource
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] [[ -z '' ]]
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] return 1
+epprd_rg:clifconfig[218] read wpar_name wpar_if wpar_netmask wpar_broadcast
+epprd_rg:clifconfig[218] IFS='~'
+epprd_rg:clifconfig[219] rc=1
+epprd_rg:clifconfig[221] [[ 1 == 0 ]]
+epprd_rg:clifconfig[275] ifconfig en0
+epprd_rg:cl_swap_IP_address[check_alias_status:124] ADDR=61.81.244.156
+epprd_rg:cl_swap_IP_address[check_alias_status:129] [ acquire = acquire ]
+epprd_rg:cl_swap_IP_address[check_alias_status:133] [[ 61.81.244.156 != 61.81.244.156 ]]
+epprd_rg:cl_swap_IP_address[check_alias_status:144] return 0
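# Sketch of check_alias_status as traced: after an acquire, the new address
# must show up in the interface's address list; the exact-token match via
# fgrep -w avoids prefix collisions such as 61.81.244.15.
ADDR=$(ifconfig en0 | awk '{print $2}' | fgrep -w 61.81.244.156)
if [[ $ADDR == 61.81.244.156 ]] ; then
    : alias verified            # the "return 0" outcome above
else
    : alias missing, the caller would treat this as a failure
fi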
+epprd_rg:cl_swap_IP_address[588] RC=0
+epprd_rg:cl_swap_IP_address[590] [[ 0 != 0 ]]
+epprd_rg:cl_swap_IP_address[594] amlog_trace '' 'Aliasing Service IP|61.81.244.156'
+epprd_rg:cl_swap_IP_address[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_swap_IP_address[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_swap_IP_address[amlog_trace:319] cltime
+epprd_rg:cl_swap_IP_address[amlog_trace:319] DATE=2023-01-28T17:10:34.904230
+epprd_rg:cl_swap_IP_address[amlog_trace:320] echo '|2023-01-28T17:10:34.904230|INFO: Aliasing Service IP|61.81.244.156'
+epprd_rg:cl_swap_IP_address[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_swap_IP_address[701] [[ 0 != 0 ]]
+epprd_rg:cl_swap_IP_address[714] flush_arp
+epprd_rg:cl_swap_IP_address[flush_arp:49] arp -an
+epprd_rg:cl_swap_IP_address[flush_arp:49] grep '\?'
+epprd_rg:cl_swap_IP_address[flush_arp:49] tr -d '()'
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.27
61.81.244.27 (61.81.244.27) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.30
61.81.244.30 (61.81.244.30) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.31
61.81.244.31 (61.81.244.31) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.33
61.81.244.33 (61.81.244.33) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.35
61.81.244.35 (61.81.244.35) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.40
61.81.244.40 (61.81.244.40) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.41
61.81.244.41 (61.81.244.41) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.191
61.81.244.191 (61.81.244.191) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.192
61.81.244.192 (61.81.244.192) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.193
61.81.244.193 (61.81.244.193) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.194
61.81.244.194 (61.81.244.194) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.201
61.81.244.201 (61.81.244.201) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.202
61.81.244.202 (61.81.244.202) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.53
61.81.244.53 (61.81.244.53) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.54
61.81.244.54 (61.81.244.54) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.204
61.81.244.204 (61.81.244.204) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.55
61.81.244.55 (61.81.244.55) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.56
61.81.244.56 (61.81.244.56) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.205
61.81.244.205 (61.81.244.205) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.57
61.81.244.57 (61.81.244.57) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.206
61.81.244.206 (61.81.244.206) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.58
61.81.244.58 (61.81.244.58) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.207
61.81.244.207 (61.81.244.207) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.59
61.81.244.59 (61.81.244.59) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.208
61.81.244.208 (61.81.244.208) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.60
61.81.244.60 (61.81.244.60) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.209
61.81.244.209 (61.81.244.209) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.210
61.81.244.210 (61.81.244.210) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.211
61.81.244.211 (61.81.244.211) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.212
61.81.244.212 (61.81.244.212) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.213
61.81.244.213 (61.81.244.213) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.215
61.81.244.215 (61.81.244.215) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.216
61.81.244.216 (61.81.244.216) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.217
61.81.244.217 (61.81.244.217) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.220
61.81.244.220 (61.81.244.220) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.221
61.81.244.221 (61.81.244.221) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.224
61.81.244.224 (61.81.244.224) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.227
61.81.244.227 (61.81.244.227) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.228
61.81.244.228 (61.81.244.228) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.229
61.81.244.229 (61.81.244.229) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.230
61.81.244.230 (61.81.244.230) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.231
61.81.244.231 (61.81.244.231) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.232
61.81.244.232 (61.81.244.232) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.233
61.81.244.233 (61.81.244.233) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.234
61.81.244.234 (61.81.244.234) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.236
61.81.244.236 (61.81.244.236) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.237
61.81.244.237 (61.81.244.237) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.239
61.81.244.239 (61.81.244.239) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.122
61.81.244.122 (61.81.244.122) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.123
61.81.244.123 (61.81.244.123) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.125
61.81.244.125 (61.81.244.125) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.127
61.81.244.127 (61.81.244.127) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.128
61.81.244.128 (61.81.244.128) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.129
61.81.244.129 (61.81.244.129) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.130
61.81.244.130 (61.81.244.130) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.131
61.81.244.131 (61.81.244.131) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.133
61.81.244.133 (61.81.244.133) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.135
61.81.244.135 (61.81.244.135) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.136
61.81.244.136 (61.81.244.136) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.138
61.81.244.138 (61.81.244.138) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.139
61.81.244.139 (61.81.244.139) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.141
61.81.244.141 (61.81.244.141) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.142
61.81.244.142 (61.81.244.142) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.143
61.81.244.143 (61.81.244.143) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.144
61.81.244.144 (61.81.244.144) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.146
61.81.244.146 (61.81.244.146) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.147
61.81.244.147 (61.81.244.147) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.149
61.81.244.149 (61.81.244.149) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.1
61.81.244.1 (61.81.244.1) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.151
61.81.244.151 (61.81.244.151) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.152
61.81.244.152 (61.81.244.152) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.153
61.81.244.153 (61.81.244.153) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.157
61.81.244.157 (61.81.244.157) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.158
61.81.244.158 (61.81.244.158) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.160
61.81.244.160 (61.81.244.160) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.161
61.81.244.161 (61.81.244.161) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.162
61.81.244.162 (61.81.244.162) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.163
61.81.244.163 (61.81.244.163) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.17
61.81.244.17 (61.81.244.17) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.19
61.81.244.19 (61.81.244.19) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:52] return 0
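# Sketch of flush_arp as traced above: every "?" entry reported by "arp -an"
# is deleted so that peers re-resolve addresses after the service IP moved;
# tr strips the parentheses around each address before "arp -d".
arp -an | grep '\?' | tr -d '()' | while read host addr other ; do
    arp -d $addr                # one "... deleted" line per entry, as logged
done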
+epprd_rg:cl_swap_IP_address[716] netstat -in
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
en0 1500 link#2 fa.e6.13.4e.a9.20 183694316 0 60659773 0 0
en0 1500 61.81.244 61.81.244.156 183694316 0 60659773 0 0
en0 1500 61.81.244 61.81.244.134 183694316 0 60659773 0 0
lo0 16896 link#1 33645560 0 33645560 0 0
lo0 16896 127 127.0.0.1 33645560 0 33645560 0 0
lo0 16896 ::1%1 33645560 0 33645560 0 0
+epprd_rg:cl_swap_IP_address[717] netstat -rnC
Routing tables
Destination Gateway Flags Wt Policy If Cost Config_Cost
Route tree for Protocol Family 2 (Internet):
default 61.81.244.1 UG 1 - en0 0 0
61.81.244.0 61.81.244.156 UHSb 1 - en0 0 0 =>
61.81.244/24 61.81.244.156 U 1 - en0 0 0
61.81.244.134 127.0.0.1 UGHS 1 - lo0 0 0
61.81.244.156 127.0.0.1 UGHS 1 - lo0 0 0
61.81.244.255 61.81.244.156 UHSb 1 - en0 0 0
127/8 127.0.0.1 U 1 - lo0 0 0
Route tree for Protocol Family 24 (Internet v6):
::1%1 ::1%1 UH 1 - lo0 0 0
+epprd_rg:cl_swap_IP_address[989] no -o ipignoreredirects=0
Setting ipignoreredirects to 0
+epprd_rg:cl_swap_IP_address[992] cl_echo 32 'Completed execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating acquire en0 61.81.244.156 61.81.244.134 255.255.255.0. Exit status = 0' /usr/es/sbin/cluster/events/utils/cl_swap_IP_address 'rotating acquire en0 61.81.244.156 61.81.244.134 255.255.255.0' 0
Jan 28 2023 17:10:35 Completed execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating acquire en0 61.81.244.156 61.81.244.134 255.255.255.0. Exit status = 0
+epprd_rg:cl_swap_IP_address[994] date
Sat Jan 28 17:10:35 KORST 2023
+epprd_rg:cl_swap_IP_address[996] exit 0
+epprd_rg:acquire_service_addr[537] RC=0
+epprd_rg:acquire_service_addr[539] (( 0 != 0 ))
+epprd_rg:acquire_service_addr[549] [[ true == false ]]
+epprd_rg:acquire_service_addr[560] cl_RMupdate resource_up All_nonerror_service_addrs acquire_service_addr
2023-01-28T17:10:35.127628
2023-01-28T17:10:35.132044
+epprd_rg:acquire_service_addr[565] [[ UNDEFINED != UNDEFINED ]]
+epprd_rg:acquire_service_addr[568] NSORDER=''
+epprd_rg:acquire_service_addr[568] export NSORDER
+epprd_rg:acquire_service_addr[571] [[ true == false ]]
+epprd_rg:acquire_service_addr[579] exit 0
Jan 28 2023 17:10:35 EVENT COMPLETED: acquire_service_addr 0
|2023-01-28T17:10:35|28698|EVENT COMPLETED: acquire_service_addr 0|
+epprd_rg:process_resources[acquire_service_labels:3087] RC=0
+epprd_rg:process_resources[acquire_service_labels:3089] (( 0 != 0 && 0 != 11 ))
+epprd_rg:process_resources[acquire_service_labels:3104] (( 0 != 0 ))
+epprd_rg:process_resources[acquire_service_labels:3110] refresh -s clcomd
0513-095 The request for subsystem refresh was completed successfully.
+epprd_rg:process_resources[acquire_service_labels:3112] return 0
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T17:10:35.207722 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=DISKS ACTION=ACQUIRE HDISKS='"hdisk2,hdisk3,hdisk4,hdisk5,hdisk6,hdisk7,hdisk8"' RESOURCE_GROUPS='"epprd_rg' '"' VOLUME_GROUPS='"datavg,datavg,datavg,datavg,datavg,datavg,datavg"'
+epprd_rg:process_resources[1] JOB_TYPE=DISKS
+epprd_rg:process_resources[1] ACTION=ACQUIRE
+epprd_rg:process_resources[1] HDISKS=hdisk2,hdisk3,hdisk4,hdisk5,hdisk6,hdisk7,hdisk8
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[1] VOLUME_GROUPS=datavg,datavg,datavg,datavg,datavg,datavg,datavg
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ DISKS == RELEASE ]]
+epprd_rg:process_resources[3360] [[ DISKS == ONLINE ]]
+epprd_rg:process_resources[3439] [[ ACQUIRE == ACQUIRE ]]
+epprd_rg:process_resources[3441] FAILED_RR_RGS=''
+epprd_rg:process_resources[3442] get_disks_main
+epprd_rg:process_resources[get_disks_main:981] PS4_FUNC=get_disks_main
+epprd_rg:process_resources[get_disks_main:981] typeset PS4_FUNC
+epprd_rg:process_resources[get_disks_main:982] [[ high == high ]]
+epprd_rg:process_resources[get_disks_main:982] set -x
+epprd_rg:process_resources[get_disks_main:983] SKIPBRKRES=0
+epprd_rg:process_resources[get_disks_main:983] typeset -li SKIPBRKRES
+epprd_rg:process_resources[get_disks_main:984] STAT=0
+epprd_rg:process_resources[get_disks_main:985] FAILURE_IN_METHOD=0
+epprd_rg:process_resources[get_disks_main:985] typeset -li FAILURE_IN_METHOD
+epprd_rg:process_resources[get_disks_main:986] LIST_OF_FAILED_RGS=''
+epprd_rg:process_resources[get_disks_main:989] : Below are the list of resources as generated by clrgpa
+epprd_rg:process_resources[get_disks_main:991] RG_LIST=epprd_rg
+epprd_rg:process_resources[get_disks_main:992] RDISK_LIST=''
+epprd_rg:process_resources[get_disks_main:993] DISK_LIST=hdisk2,hdisk3,hdisk4,hdisk5,hdisk6,hdisk7,hdisk8
+epprd_rg:process_resources[get_disks_main:994] VG_LIST=datavg,datavg,datavg,datavg,datavg,datavg,datavg
+epprd_rg:process_resources[get_disks_main:997] : Resource groups are processed individually. This is required because
+epprd_rg:process_resources[get_disks_main:998] : the replication mechanism may differ between resource groups.
+epprd_rg:process_resources[get_disks_main:1002] getReplicatedResources epprd_rg
+epprd_rg:process_resources[getReplicatedResources:699] PS4_FUNC=getReplicatedResources
+epprd_rg:process_resources[getReplicatedResources:699] typeset PS4_FUNC
+epprd_rg:process_resources[getReplicatedResources:700] [[ high == high ]]
+epprd_rg:process_resources[getReplicatedResources:700] set -x
+epprd_rg:process_resources[getReplicatedResources:702] RV=false
+epprd_rg:process_resources[getReplicatedResources:704] clodmget -n -f type HACMPrresmethods
+epprd_rg:process_resources[getReplicatedResources:704] [[ -n 9 ]]
+epprd_rg:process_resources[getReplicatedResources:707] : Replicated resource methods are defined, check for resources
+epprd_rg:process_resources[getReplicatedResources:709] clodmget -q $'name like \'*_REP_RESOURCE\' AND group=epprd_rg' -f value -n HACMPresource
+epprd_rg:process_resources[getReplicatedResources:709] [[ -n '' ]]
+epprd_rg:process_resources[getReplicatedResources:718] : Verify if any backup profiles are configured and trigger cbm utilities based on that
+epprd_rg:process_resources[getReplicatedResources:720] clodmget -q name=BACKUP_ENABLED -f value HACMPresource
+epprd_rg:process_resources[getReplicatedResources:720] [[ -n '' ]]
+epprd_rg:process_resources[getReplicatedResources:739] echo false
+epprd_rg:process_resources[get_disks_main:1002] REPLICATED_RESOURCES=false
+epprd_rg:process_resources[get_disks_main:1005] : Break out the resources for resource group epprd_rg
+epprd_rg:process_resources[get_disks_main:1007] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[get_disks_main:1008] VOLUME_GROUPS=''
+epprd_rg:process_resources[get_disks_main:1009] HDISKS=''
+epprd_rg:process_resources[get_disks_main:1010] RHDISKS=''
+epprd_rg:process_resources[get_disks_main:1011] RDISK_LIST=''
+epprd_rg:process_resources[get_disks_main:1014] : Get the volume groups in resource group epprd_rg
+epprd_rg:process_resources[get_disks_main:1016] print datavg,datavg,datavg,datavg,datavg,datavg,datavg
+epprd_rg:process_resources[get_disks_main:1016] read VOLUME_GROUPS VG_LIST
+epprd_rg:process_resources[get_disks_main:1016] IFS=:
+epprd_rg:process_resources[get_disks_main:1018] : Removing duplicate entries in VG list.
+epprd_rg:process_resources[get_disks_main:1020] echo datavg,datavg,datavg,datavg,datavg,datavg,datavg
+epprd_rg:process_resources[get_disks_main:1020] tr , '\n'
+epprd_rg:process_resources[get_disks_main:1020] xargs
+epprd_rg:process_resources[get_disks_main:1020] sort -u
+epprd_rg:process_resources[get_disks_main:1020] VOLUME_GROUPS=datavg
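# Sketch of the duplicate removal traced above: the comma-separated VG list
# is split one entry per line, uniqued with sort -u, and rejoined
# space-separated by xargs (which defaults to echo).
VOLUME_GROUPS=$(echo datavg,datavg,datavg,datavg,datavg,datavg,datavg |
    tr , '\n' | sort -u | xargs)         # -> "datavg"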
+epprd_rg:process_resources[get_disks_main:1022] : Get the disks corresponding to these volume groups
+epprd_rg:process_resources[get_disks_main:1024] print hdisk2,hdisk3,hdisk4,hdisk5,hdisk6,hdisk7,hdisk8
+epprd_rg:process_resources[get_disks_main:1024] read HDISKS DISK_LIST
+epprd_rg:process_resources[get_disks_main:1024] IFS=:
+epprd_rg:process_resources[get_disks_main:1025] HDISKS='hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8'
+epprd_rg:process_resources[get_disks_main:1031] : Pick up any raw disks not returned by clrgpa
+epprd_rg:process_resources[get_disks_main:1033] clodmget -q group='epprd_rg AND name=RAW_DISK' HACMPresource
+epprd_rg:process_resources[get_disks_main:1033] [[ -n '' ]]
+epprd_rg:process_resources[get_disks_main:1042] : Get any raw disks in resource group epprd_rg
+epprd_rg:process_resources[get_disks_main:1045] print
+epprd_rg:process_resources[get_disks_main:1045] read RHDISKS RDISK_LIST
+epprd_rg:process_resources[get_disks_main:1045] IFS=:
+epprd_rg:process_resources[get_disks_main:1046] RHDISKS=''
+epprd_rg:process_resources[get_disks_main:1047] print datavg
+epprd_rg:process_resources[get_disks_main:1047] read VOLUME_GROUPS
+epprd_rg:process_resources[get_disks_main:1051] : At this point, the global variables below should be set to
+epprd_rg:process_resources[get_disks_main:1052] : the values associated with resource group epprd_rg
+epprd_rg:process_resources[get_disks_main:1054] export RESOURCE_GROUPS
+epprd_rg:process_resources[get_disks_main:1055] export VOLUME_GROUPS
+epprd_rg:process_resources[get_disks_main:1056] export HDISKS
+epprd_rg:process_resources[get_disks_main:1057] export RHDISKS
+epprd_rg:process_resources[get_disks_main:1059] [[ false == true ]]
+epprd_rg:process_resources[get_disks_main:1182] get_disks
+epprd_rg:process_resources[get_disks:1198] PS4_FUNC=get_disks
+epprd_rg:process_resources[get_disks:1198] typeset PS4_FUNC
+epprd_rg:process_resources[get_disks:1199] [[ high == high ]]
+epprd_rg:process_resources[get_disks:1199] set -x
+epprd_rg:process_resources[get_disks:1201] STAT=0
+epprd_rg:process_resources[get_disks:1204] : Most volume groups are Enhanced Concurrent Mode, and it should
+epprd_rg:process_resources[get_disks:1205] : not be necessary to break reserves. If all the volume groups
+epprd_rg:process_resources[get_disks:1206] : are ECM, we should be able to skip breaking reserves. If it
+epprd_rg:process_resources[get_disks:1207] : turns out that there is a reserve on a disk in an ECM volume
+epprd_rg:process_resources[get_disks:1208] : group, that will be handled by cl_pvo making an explicit call
+epprd_rg:process_resources[get_disks:1209] : to cl_disk_available.
+epprd_rg:process_resources[get_disks:1213] all_ecm=TRUE
+epprd_rg:process_resources[get_disks:1214] IFS=:
+epprd_rg:process_resources[get_disks:1214] set -- datavg
+epprd_rg:process_resources[get_disks:1214] print datavg
+epprd_rg:process_resources[get_disks:1216] print datavg
+epprd_rg:process_resources[get_disks:1216] sort -u
+epprd_rg:process_resources[get_disks:1216] tr , '\n'
+epprd_rg:process_resources[get_disks:1218] clodmget -q 'name = datavg and attribute = conc_capable' -f value -n CuAt
+epprd_rg:process_resources[get_disks:1218] [[ y != y ]]
+epprd_rg:process_resources[get_disks:1224] [[ TRUE == FALSE ]]
+epprd_rg:process_resources[get_disks:1226] [[ TRUE == TRUE ]]
+epprd_rg:process_resources[get_disks:1226] return 0
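# Sketch of the Enhanced Concurrent Mode test above: each volume group's
# conc_capable attribute is read from the CuAt ODM class; any value other
# than "y" would force the reserve-breaking logic to run. datavg is ECM,
# so breaking reserves is skipped.
all_ecm=TRUE
for vg in $(echo datavg | tr , '\n' | sort -u) ; do
    conc=$(clodmget -q "name = $vg and attribute = conc_capable" -f value -n CuAt)
    [[ $conc != y ]] && all_ecm=FALSE
done
[[ $all_ecm == TRUE ]] && : skip breaking reserves, as in the trace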
+epprd_rg:process_resources[get_disks_main:1183] STAT=0
+epprd_rg:process_resources[get_disks_main:1186] return 0
+epprd_rg:process_resources[3443] tr ' ' '\n'
+epprd_rg:process_resources[3443] echo
+epprd_rg:process_resources[3443] FAILED_RR_RGS=''
+epprd_rg:process_resources[3444] [[ -n '' ]]
+epprd_rg:process_resources[3450] clodmget -n -q policy=scsi -f value HACMPsplitmerge
+epprd_rg:process_resources[3450] SCSIPR_ENABLED=''
+epprd_rg:process_resources[3450] typeset SCSIPR_ENABLED
+epprd_rg:process_resources[3451] [[ '' == Yes ]]
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T17:10:35.284368 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=VGS ACTION=ACQUIRE CONCURRENT_VOLUME_GROUP='""' VOLUME_GROUPS='"datavg"' RESOURCE_GROUPS='"epprd_rg' '"' EXPORT_FILESYSTEM='""'
+epprd_rg:process_resources[1] JOB_TYPE=VGS
+epprd_rg:process_resources[1] ACTION=ACQUIRE
+epprd_rg:process_resources[1] CONCURRENT_VOLUME_GROUP=''
+epprd_rg:process_resources[1] VOLUME_GROUPS=datavg
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[1] EXPORT_FILESYSTEM=''
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ VGS == RELEASE ]]
+epprd_rg:process_resources[3360] [[ VGS == ONLINE ]]
+epprd_rg:process_resources[3571] process_volume_groups_main ACQUIRE
+epprd_rg:process_resources[process_volume_groups_main:2293] PS4_FUNC=process_volume_groups_main
+epprd_rg:process_resources[process_volume_groups_main:2293] typeset PS4_FUNC
+epprd_rg:process_resources[process_volume_groups_main:2294] [[ high == high ]]
+epprd_rg:process_resources[process_volume_groups_main:2294] set -x
+epprd_rg:process_resources[process_volume_groups_main:2295] DEF_VARYON_ACTION=0
+epprd_rg:process_resources[process_volume_groups_main:2295] typeset -li DEF_VARYON_ACTION
+epprd_rg:process_resources[process_volume_groups_main:2296] FAILURE_IN_METHOD=0
+epprd_rg:process_resources[process_volume_groups_main:2296] typeset -li FAILURE_IN_METHOD
+epprd_rg:process_resources[process_volume_groups_main:2297] ACTION=ACQUIRE
+epprd_rg:process_resources[process_volume_groups_main:2297] typeset ACTION
+epprd_rg:process_resources[process_volume_groups_main:2298] STAT=0
+epprd_rg:process_resources[process_volume_groups_main:2299] VG_LIST=datavg
+epprd_rg:process_resources[process_volume_groups_main:2300] RG_LIST=epprd_rg
+epprd_rg:process_resources[process_volume_groups_main:2304] getReplicatedResources epprd_rg
+epprd_rg:process_resources[getReplicatedResources:699] PS4_FUNC=getReplicatedResources
+epprd_rg:process_resources[getReplicatedResources:699] typeset PS4_FUNC
+epprd_rg:process_resources[getReplicatedResources:700] [[ high == high ]]
+epprd_rg:process_resources[getReplicatedResources:700] set -x
+epprd_rg:process_resources[getReplicatedResources:702] RV=false
+epprd_rg:process_resources[getReplicatedResources:704] clodmget -n -f type HACMPrresmethods
+epprd_rg:process_resources[getReplicatedResources:704] [[ -n 9 ]]
+epprd_rg:process_resources[getReplicatedResources:707] : Replicated resource methods are defined, check for resources
+epprd_rg:process_resources[getReplicatedResources:709] clodmget -q $'name like \'*_REP_RESOURCE\' AND group=epprd_rg' -f value -n HACMPresource
+epprd_rg:process_resources[getReplicatedResources:709] [[ -n '' ]]
+epprd_rg:process_resources[getReplicatedResources:718] : Verify if any backup profiles are configured and trigger cbm utilities based on that
+epprd_rg:process_resources[getReplicatedResources:720] clodmget -q name=BACKUP_ENABLED -f value HACMPresource
+epprd_rg:process_resources[getReplicatedResources:720] [[ -n '' ]]
+epprd_rg:process_resources[getReplicatedResources:739] echo false
+epprd_rg:process_resources[process_volume_groups_main:2304] REPLICATED_RESOURCES=false
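
getReplicatedResources boils down to two ODM lookups: are replicated resource methods defined at all (HACMPrresmethods), and if so, does this group actually own a *_REP_RESOURCE instance. A condensed sketch built from the queries in the trace (the third lookup, for BACKUP_ENABLED backup profiles, is elided):

    RV=false
    if [[ -n $(clodmget -n -f type HACMPrresmethods) ]]; then
        # methods exist - check whether this resource group uses one
        if [[ -n $(clodmget -q "name like '*_REP_RESOURCE' AND group=$GROUPNAME" \
                        -f value -n HACMPresource) ]]; then
            RV=true
        fi
    fi
    echo $RV    # captured by the caller as REPLICATED_RESOURCES
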
+epprd_rg:process_resources[process_volume_groups_main:2305] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[process_volume_groups_main:2306] print -- datavg
+epprd_rg:process_resources[process_volume_groups_main:2306] read VOLUME_GROUPS VG_LIST
+epprd_rg:process_resources[process_volume_groups_main:2306] IFS=:
+epprd_rg:process_resources[process_volume_groups_main:2307] VOLUME_GROUPS=datavg
+epprd_rg:process_resources[process_volume_groups_main:2310] : At this point, these variables contain information only for epprd_rg
+epprd_rg:process_resources[process_volume_groups_main:2312] export VOLUME_GROUPS
+epprd_rg:process_resources[process_volume_groups_main:2313] export RESOURCE_GROUPS
+epprd_rg:process_resources[process_volume_groups_main:2315] [[ false == true ]]
+epprd_rg:process_resources[process_volume_groups_main:2555] process_volume_groups ACQUIRE
+epprd_rg:process_resources[process_volume_groups:2571] PS4_FUNC=process_volume_groups
+epprd_rg:process_resources[process_volume_groups:2571] typeset PS4_FUNC
+epprd_rg:process_resources[process_volume_groups:2572] [[ high == high ]]
+epprd_rg:process_resources[process_volume_groups:2572] set -x
+epprd_rg:process_resources[process_volume_groups:2573] STAT=0
+epprd_rg:process_resources[process_volume_groups:2575] GROUPNAME=epprd_rg
+epprd_rg:process_resources[process_volume_groups:2575] export GROUPNAME
+epprd_rg:process_resources[process_volume_groups:2578] [[ ACQUIRE == ACQUIRE ]]
+epprd_rg:process_resources[process_volume_groups:2581] : Varyon the VGs in the environment
+epprd_rg:process_resources[process_volume_groups:2583] cl_activate_vgs -n
+epprd_rg:cl_activate_vgs[213] [[ high == high ]]
+epprd_rg:cl_activate_vgs[213] version=1.46
+epprd_rg:cl_activate_vgs[215] STATUS=0
+epprd_rg:cl_activate_vgs[215] typeset -li STATUS
+epprd_rg:cl_activate_vgs[216] SYNCFLAG=''
+epprd_rg:cl_activate_vgs[217] CLENV=''
+epprd_rg:cl_activate_vgs[218] TMP_FILENAME=/tmp/_activate_vgs.tmp
+epprd_rg:cl_activate_vgs[219] USE_OEM_METHODS=false
+epprd_rg:cl_activate_vgs[221] PROC_RES=false
+epprd_rg:cl_activate_vgs[225] [[ VGS != 0 ]]
+epprd_rg:cl_activate_vgs[225] [[ VGS != GROUP ]]
+epprd_rg:cl_activate_vgs[226] PROC_RES=true
+epprd_rg:cl_activate_vgs[232] [[ -n == -n ]]
+epprd_rg:cl_activate_vgs[234] SYNCFLAG=-n
+epprd_rg:cl_activate_vgs[235] shift
+epprd_rg:cl_activate_vgs[240] (( 0 != 0 ))
+epprd_rg:cl_activate_vgs[247] set -u
+epprd_rg:cl_activate_vgs[250] rm -f /tmp/_activate_vgs.tmp
+epprd_rg:cl_activate_vgs[254] lsvg -L -o
+epprd_rg:cl_activate_vgs[254] print caavg_private rootvg
+epprd_rg:cl_activate_vgs[254] VGSTATUS='caavg_private rootvg'
+epprd_rg:cl_activate_vgs[257] ALLVGS=All_volume_groups
+epprd_rg:cl_activate_vgs[258] cl_RMupdate resource_acquiring All_volume_groups cl_activate_vgs
2023-01-28T17:10:35.356061
2023-01-28T17:10:35.360470
+epprd_rg:cl_activate_vgs[262] [[ true == false ]]
+epprd_rg:cl_activate_vgs[285] LIST_OF_VOLUME_GROUPS_FOR_RG=''
+epprd_rg:cl_activate_vgs[289] export GROUPNAME
+epprd_rg:cl_activate_vgs[291] echo datavg
+epprd_rg:cl_activate_vgs[291] read LIST_OF_VOLUME_GROUPS_FOR_RG VOLUME_GROUPS
+epprd_rg:cl_activate_vgs[291] IFS=:
+epprd_rg:cl_activate_vgs[294] echo datavg
+epprd_rg:cl_activate_vgs[296] sort -u
+epprd_rg:cl_activate_vgs[295] tr , '\n'
+epprd_rg:cl_activate_vgs[294] LIST_OF_VOLUME_GROUPS_FOR_RG=datavg
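
The echo | read pairing above works because ksh runs the last stage of a pipeline in the current shell, so read really does set variables that survive the statement. VOLUME_GROUPS carries one colon-separated field per resource group, each field a comma-separated VG list. A sketch of the parse, assuming that layout:

    # peel off this group's field; the remainder stays in VOLUME_GROUPS
    echo $VOLUME_GROUPS | IFS=: read LIST_OF_VOLUME_GROUPS_FOR_RG VOLUME_GROUPS
    # commas to newlines, duplicates dropped
    LIST_OF_VOLUME_GROUPS_FOR_RG=$(echo $LIST_OF_VOLUME_GROUPS_FOR_RG | tr ',' '\n' | sort -u)
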
+epprd_rg:cl_activate_vgs[298] vgs_list datavg
+epprd_rg:cl_activate_vgs[vgs_list:178] PS4_LOOP=''
+epprd_rg:cl_activate_vgs[vgs_list:178] typeset PS4_LOOP
+epprd_rg:cl_activate_vgs:datavg[vgs_list:182] PS4_LOOP=datavg
+epprd_rg:cl_activate_vgs:datavg[vgs_list:186] [[ 'caavg_private rootvg' == @(?(*\ )datavg?(\ *)) ]]
+epprd_rg:cl_activate_vgs:datavg[vgs_list:192] : call varyon for the volume group in the foreground
+epprd_rg:cl_activate_vgs:datavg[vgs_list:194] vgs_chk datavg -n cl_activate_vgs
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:78] VG=datavg
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:78] typeset VG
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:79] SYNCFLAG=-n
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:79] typeset SYNCFLAG
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:80] PROGNAME=cl_activate_vgs
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:80] typeset PROGNAME
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:81] STATUS=0
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:81] typeset -li STATUS
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:83] [[ -n '' ]]
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:100] amlog_trace '' 'Activating Volume Group|datavg'
+epprd_rg:cl_activate_vgs(0.052):datavg[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_vgs(0.053):datavg[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_vgs(0.078):datavg[amlog_trace:319] cltime
+epprd_rg:cl_activate_vgs(0.080):datavg[amlog_trace:319] DATE=2023-01-28T17:10:35.397679
+epprd_rg:cl_activate_vgs(0.080):datavg[amlog_trace:320] echo '|2023-01-28T17:10:35.397679|INFO: Activating Volume Group|datavg'
+epprd_rg:cl_activate_vgs(0.080):datavg[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
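
amlog_trace is the availability-metrics hook: rotate clavailability.log via clcycle if due, then append one pipe-delimited record per event. Reconstructed from the trace:

    clcycle clavailability.log > /dev/null 2>&1        # rotate the log if needed
    DATE=$(cltime)                                     # e.g. 2023-01-28T17:10:35.397679
    echo "|$DATE|INFO: Activating Volume Group|datavg" \
        >> /var/hacmp/availability/clavailability.log
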
+epprd_rg:cl_activate_vgs(0.080):datavg[vgs_chk:102] typeset -x ERRMSG
+epprd_rg:cl_activate_vgs(0.080):datavg[vgs_chk:103] clvaryonvg -n datavg
+epprd_rg:clvaryonvg(0.009):datavg[985] version=1.21.7.22
+epprd_rg:clvaryonvg(0.009):datavg[989] : Without this test, the cause of a failure due to non-root execution may not be obvious
+epprd_rg:clvaryonvg(0.009):datavg[991] [[ -z '' ]]
+epprd_rg:clvaryonvg(0.009):datavg[991] id -nu
+epprd_rg:clvaryonvg(0.010):datavg[991] 2> /dev/null
+epprd_rg:clvaryonvg(0.012):datavg[991] user_name=root
+epprd_rg:clvaryonvg(0.012):datavg[994] : Check if RBAC is enabled
+epprd_rg:clvaryonvg(0.012):datavg[996] is_rbac_enabled=''
+epprd_rg:clvaryonvg(0.012):datavg[996] typeset is_rbac_enabled
+epprd_rg:clvaryonvg(0.012):datavg[997] clodmget -nq group='LDAPClient and name=RBACConfig' -f value HACMPLDAP
+epprd_rg:clvaryonvg(0.013):datavg[997] 2> /dev/null
+epprd_rg:clvaryonvg(0.015):datavg[997] is_rbac_enabled=''
+epprd_rg:clvaryonvg(0.016):datavg[999] role=''
+epprd_rg:clvaryonvg(0.016):datavg[999] typeset role
+epprd_rg:clvaryonvg(0.016):datavg[1000] [[ root != root ]]
+epprd_rg:clvaryonvg(0.016):datavg[1009] LEAVEOFF=FALSE
+epprd_rg:clvaryonvg(0.016):datavg[1010] FORCEON=''
+epprd_rg:clvaryonvg(0.016):datavg[1011] FORCEUPD=FALSE
+epprd_rg:clvaryonvg(0.016):datavg[1012] NOQUORUM=20
+epprd_rg:clvaryonvg(0.016):datavg[1013] MISSING_UPDATES=30
+epprd_rg:clvaryonvg(0.016):datavg[1014] DATA_DIVERGENCE=31
+epprd_rg:clvaryonvg(0.016):datavg[1015] ARGS=''
+epprd_rg:clvaryonvg(0.016):datavg[1016] typeset -li varyonvg_rc
+epprd_rg:clvaryonvg(0.016):datavg[1017] typeset -li MAXLVS
+epprd_rg:clvaryonvg(0.016):datavg[1018] ENODEV=19
+epprd_rg:clvaryonvg(0.016):datavg[1018] typeset -li ENODEV
+epprd_rg:clvaryonvg(0.016):datavg[1020] set -u
+epprd_rg:clvaryonvg(0.016):datavg[1022] /bin/dspmsg -s 2 cspoc.cat 31 'usage: clvaryonvg [-F] [-f] [-n] [-p] [-s] [-o] \n'
+epprd_rg:clvaryonvg(0.018):datavg[1022] USAGE='usage: clvaryonvg [-F] [-f] [-n] [-p] [-s] [-o] '
+epprd_rg:clvaryonvg(0.018):datavg[1023] (( 2 < 1 ))
+epprd_rg:clvaryonvg(0.018):datavg[1029] : Parse the options
+epprd_rg:clvaryonvg(0.018):datavg[1031] S_FLAG=''
+epprd_rg:clvaryonvg(0.018):datavg[1032] P_FLAG=''
+epprd_rg:clvaryonvg(0.018):datavg[1033] getopts :Ffnops option
+epprd_rg:clvaryonvg(0.018):datavg[1038] : -n Always applied, retained for compatibility
+epprd_rg:clvaryonvg(0.018):datavg[1033] getopts :Ffnops option
+epprd_rg:clvaryonvg(0.018):datavg[1048] : Pick up the volume group name, which follows the options
+epprd_rg:clvaryonvg(0.018):datavg[1050] shift 1
+epprd_rg:clvaryonvg(0.018):datavg[1051] VG=datavg
+epprd_rg:clvaryonvg(0.018):datavg[1054] : Set up filenames we will be using
+epprd_rg:clvaryonvg(0.018):datavg[1056] VGDIR=/usr/es/sbin/cluster/etc/vg/
+epprd_rg:clvaryonvg(0.018):datavg[1057] TSFILE=/usr/es/sbin/cluster/etc/vg/datavg.tstamp
+epprd_rg:clvaryonvg(0.018):datavg[1058] DSFILE=/usr/es/sbin/cluster/etc/vg/datavg.desc
+epprd_rg:clvaryonvg(0.018):datavg[1059] RPFILE=/usr/es/sbin/cluster/etc/vg/datavg.replay
+epprd_rg:clvaryonvg(0.019):datavg[1060] permset=/usr/es/sbin/cluster/etc/vg/datavg.perms
+epprd_rg:clvaryonvg(0.019):datavg[1061] failfile=/usr/es/sbin/cluster/etc/vg/datavg.fail
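
clvaryonvg keeps per-VG bookkeeping under /usr/es/sbin/cluster/etc/vg/: a .tstamp file (HA's own copy of the VGDA timestamp, consulted for older volume groups) plus .desc, .replay, .perms and .fail files. Only the timestamp file is exercised in this trace; a sketch of how it would be read, assuming that role:

    TSFILE=/usr/es/sbin/cluster/etc/vg/${VG}.tstamp
    if [[ -s $TSFILE ]]; then
        read TS_FROM_ODM < $TSFILE   # prefer the HA-maintained timestamp when present
    fi
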
+epprd_rg:clvaryonvg(0.019):datavg[1065] : Get some LVM information we are going to need in processing this
+epprd_rg:clvaryonvg(0.019):datavg[1066] : volume group:
+epprd_rg:clvaryonvg(0.019):datavg[1067] : - volume group identifier - vgid
+epprd_rg:clvaryonvg(0.019):datavg[1068] : - list of disks
+epprd_rg:clvaryonvg(0.019):datavg[1069] : - quorum indicator
+epprd_rg:clvaryonvg(0.019):datavg[1070] : - timestamp if present
+epprd_rg:clvaryonvg(0.019):datavg[1072] /usr/sbin/getlvodm -v datavg
+epprd_rg:clvaryonvg(0.022):datavg[1072] VGID=00c44af100004b00000001851e9dc053
+epprd_rg:clvaryonvg(0.023):datavg[1073] cut '-d ' -f2
+epprd_rg:clvaryonvg(0.023):datavg[1073] /usr/sbin/getlvodm -w 00c44af100004b00000001851e9dc053
+epprd_rg:clvaryonvg(0.027):datavg[1073] pvlst=$'hdisk2\nhdisk3\nhdisk4\nhdisk5\nhdisk6\nhdisk7\nhdisk8'
+epprd_rg:clvaryonvg(0.027):datavg[1074] /usr/sbin/getlvodm -Q datavg
+epprd_rg:clvaryonvg(0.030):datavg[1074] quorum=y
+epprd_rg:clvaryonvg(0.030):datavg[1075] TS_FROM_DISK=''
+epprd_rg:clvaryonvg(0.030):datavg[1076] TS_FROM_ODM=''
+epprd_rg:clvaryonvg(0.030):datavg[1077] GOOD_PV=''
+epprd_rg:clvaryonvg(0.030):datavg[1078] O_flag=''
+epprd_rg:clvaryonvg(0.030):datavg[1079] A_flag=''
+epprd_rg:clvaryonvg(0.030):datavg[1080] mode_flag=''
+epprd_rg:clvaryonvg(0.030):datavg[1081] vg_on_mode=''
+epprd_rg:clvaryonvg(0.030):datavg[1082] vg_set_passive=FALSE
+epprd_rg:clvaryonvg(0.030):datavg[1084] odmget -q 'attribute = varyon_state' PdAt
+epprd_rg:clvaryonvg(0.033):datavg[1084] [[ -n $'\nPdAt:\n\tuniquetype = "logical_volume/vgsubclass/vgtype"\n\tattribute = "varyon_state"\n\tdeflt = "0"\n\tvalues = "0,1,2,3"\n\twidth = ""\n\ttype = "R"\n\tgeneric = ""\n\trep = "l"\n\tnls_index = 0' ]]
+epprd_rg:clvaryonvg(0.033):datavg[1087] : LVM may record that a volume group was varied on from an earlier
+epprd_rg:clvaryonvg(0.033):datavg[1088] : IPL. Rely on HA state tracking, and override the LVM check
+epprd_rg:clvaryonvg(0.033):datavg[1090] O_flag=-O
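
The PdAt query checks whether this LVM level defines a varyon_state attribute. Where it does, LVM can remember a volume group as varied on from a previous IPL, so clvaryonvg arms -O to make varyonvg override that record and trust PowerHA's own state tracking instead. In short:

    if [[ -n $(odmget -q "attribute = varyon_state" PdAt) ]]; then
        O_flag=-O   # override LVM's remembered varyon state; HA tracking wins
    fi
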
+epprd_rg:clvaryonvg(0.033):datavg[1093] : Check if SCSI PR is enabled and, if it is so,
+epprd_rg:clvaryonvg(0.033):datavg[1094] : confirm that the SCSI PR reservations are intact.
+epprd_rg:clvaryonvg(0.034):datavg[1096] lssrc -ls clstrmgrES
+epprd_rg:clvaryonvg(0.034):datavg[1096] 2>& 1
+epprd_rg:clvaryonvg(0.035):datavg[1096] grep 'Current state:'
+epprd_rg:clvaryonvg(0.037):datavg[1096] egrep -q -v 'ST_INIT|NOT_CONFIGURED'
+epprd_rg:clvaryonvg(0.050):datavg[1098] clodmget -n -q policy=scsi -f value HACMPsplitmerge
+epprd_rg:clvaryonvg(0.053):datavg[1098] SCSIPR_ENABLED=''
+epprd_rg:clvaryonvg(0.053):datavg[1098] typeset SCSIPR_ENABLED
+epprd_rg:clvaryonvg(0.053):datavg[1099] clodmget -q $'name like \'*VOLUME_GROUP\' and value = datavg' -f group -n HACMPresource
+epprd_rg:clvaryonvg(0.056):datavg[1099] resgrp=epprd_rg
+epprd_rg:clvaryonvg(0.056):datavg[1099] typeset resgrp
+epprd_rg:clvaryonvg(0.056):datavg[1100] [[ '' == Yes ]]
+epprd_rg:clvaryonvg(0.056):datavg[1134] : Operations such as varying on the volume group are likely to
+epprd_rg:clvaryonvg(0.056):datavg[1135] : require read/write access. So, set any volume group fencing appropriately.
+epprd_rg:clvaryonvg(0.056):datavg[1137] cl_set_vg_fence_height -c datavg rw
+epprd_rg:clvaryonvg(0.060):datavg[1138] RC=0
+epprd_rg:clvaryonvg(0.060):datavg[1139] (( 19 == 0 ))
+epprd_rg:clvaryonvg(0.060):datavg[1147] : Return code from volume group fencing for datavg is 0
+epprd_rg:clvaryonvg(0.060):datavg[1148] (( 0 != 0 ))
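
Before touching the VG, fencing is opened up to read/write so the varyon and subsequent mounts can proceed. Return code 0 means the fence height was set; the trace's comparison against 19 (ENODEV, defined at entry) is the tolerated "no fence group exists for this VG" case. A simplified sketch:

    cl_set_vg_fence_height -c $VG rw
    RC=$?
    if (( RC == ENODEV )); then
        :                # no fence group configured - nothing to adjust
    elif (( RC != 0 )); then
        exit $RC         # genuine fencing failure (the real script logs first)
    fi
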
+epprd_rg:clvaryonvg(0.060):datavg[1160] : Check on the current state of the volume group
+epprd_rg:clvaryonvg(0.062):datavg[1182] grep -x -q datavg
+epprd_rg:clvaryonvg(0.062):datavg[1182] lsvg -L
+epprd_rg:clvaryonvg(0.065):datavg[1184] : The volume group is known - check to see if it's already varied on.
+epprd_rg:clvaryonvg(0.066):datavg[1186] grep -x -q datavg
+epprd_rg:clvaryonvg(0.066):datavg[1186] lsvg -L -o
+epprd_rg:clvaryonvg(0.070):datavg[1190] lsvg -L datavg
+epprd_rg:clvaryonvg(0.070):datavg[1190] 2> /dev/null
+epprd_rg:clvaryonvg(0.070):datavg[1190] grep -q -i -w passive-only
+epprd_rg:clvaryonvg(0.112):datavg[1191] vg_on_mode=passive
+epprd_rg:clvaryonvg(0.114):datavg[1194] grep -iw removed
+epprd_rg:clvaryonvg(0.114):datavg[1194] lsvg -p datavg
+epprd_rg:clvaryonvg(0.114):datavg[1194] 2> /dev/null
+epprd_rg:clvaryonvg(0.134):datavg[1194] removed_disks=''
+epprd_rg:clvaryonvg(0.134):datavg[1195] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.134):datavg[1213] [[ -n passive ]]
+epprd_rg:clvaryonvg(0.134):datavg[1215] lqueryvg -g 00c44af100004b00000001851e9dc053
+epprd_rg:clvaryonvg(0.135):datavg[1215] 1> /dev/null 2>& 1
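
The probe above sorts the VG into one of three states: unknown locally, already varied on, or held passive-only because this node joined while the enhanced-concurrent VG was active elsewhere; the closing lqueryvg against the VGID confirms the disks are actually readable. Condensed (the already-online branch is inferred, since only the passive path appears in this trace):

    if lsvg -L | grep -x -q $VG; then                   # VG is known to this node
        if lsvg -L -o | grep -x -q $VG; then
            vg_on_mode=ordinary                         # already varied on
        elif lsvg -L $VG 2>/dev/null | grep -q -i -w passive-only; then
            vg_on_mode=passive                          # passive concurrent mode
        fi
    fi
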
+epprd_rg:clvaryonvg(0.154):datavg[1321] :
+epprd_rg:clvaryonvg(0.154):datavg[1322] : First, sniff at the disk to see if the local ODM information
+epprd_rg:clvaryonvg(0.154):datavg[1323] : matches what is on the disk.
+epprd_rg:clvaryonvg(0.154):datavg[1324] :
+epprd_rg:clvaryonvg(0.154):datavg[1326] vgdatimestamps
+epprd_rg:clvaryonvg(0.154):datavg[vgdatimestamps:201] PS4_FUNC=vgdatimestamps
+epprd_rg:clvaryonvg(0.154):datavg[vgdatimestamps:201] typeset PS4_FUNC
+epprd_rg:clvaryonvg(0.154):datavg[vgdatimestamps:202] [[ high == high ]]
+epprd_rg:clvaryonvg(0.154):datavg[vgdatimestamps:202] set -x
+epprd_rg:clvaryonvg(0.154):datavg[vgdatimestamps:203] set -u
+epprd_rg:clvaryonvg(0.154):datavg[vgdatimestamps:206] : See what timestamp LVM has recorded from the last time it checked
+epprd_rg:clvaryonvg(0.154):datavg[vgdatimestamps:207] : the disks
+epprd_rg:clvaryonvg(0.154):datavg[vgdatimestamps:209] /usr/sbin/getlvodm -T 00c44af100004b00000001851e9dc053
+epprd_rg:clvaryonvg(0.155):datavg[vgdatimestamps:209] 2> /dev/null
+epprd_rg:clvaryonvg(0.158):datavg[vgdatimestamps:209] TS_FROM_ODM=63d4d463310b8939
+epprd_rg:clvaryonvg(0.158):datavg[vgdatimestamps:212] : Check to see if HACMP is maintaining a timestamp for this volume group
+epprd_rg:clvaryonvg(0.158):datavg[vgdatimestamps:213] : Needed for some older volume groups
+epprd_rg:clvaryonvg(0.158):datavg[vgdatimestamps:215] [[ -s /usr/es/sbin/cluster/etc/vg/datavg.tstamp ]]
+epprd_rg:clvaryonvg(0.158):datavg[vgdatimestamps:234] : Get the time stamp from the actual disk
+epprd_rg:clvaryonvg(0.158):datavg[vgdatimestamps:236] clvgdats /dev/datavg
+epprd_rg:clvaryonvg(0.159):datavg[vgdatimestamps:236] 2> /dev/null
+epprd_rg:clvaryonvg(0.168):datavg[vgdatimestamps:236] TS_FROM_DISK=63d4d463310b8939
+epprd_rg:clvaryonvg(0.168):datavg[vgdatimestamps:237] clvgdats_rc=0
+epprd_rg:clvaryonvg(0.168):datavg[vgdatimestamps:238] (( 0 != 0 ))
+epprd_rg:clvaryonvg(0.168):datavg[vgdatimestamps:247] [[ -z 63d4d463310b8939 ]]
+epprd_rg:clvaryonvg(0.168):datavg[1328] [[ 63d4d463310b8939 != 63d4d463310b8939 ]]
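
vgdatimestamps compares two copies of the VGDA timestamp: the one the local ODM last recorded (getlvodm -T on the VGID) and the one actually on disk (clvgdats on the device). Equal values, as seen here, mean the local LVM definitions are current and no exportvg/importvg resync is needed. The essence:

    TS_FROM_ODM=$(getlvodm -T $VGID 2>/dev/null)     # ODM's idea of the VGDA timestamp
    TS_FROM_DISK=$(clvgdats /dev/$VG 2>/dev/null)    # timestamp recorded in the VGDA itself
    if [[ $TS_FROM_ODM != $TS_FROM_DISK ]]; then
        :   # stale local definitions - resynchronize before going further
    fi
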
+epprd_rg:clvaryonvg(0.168):datavg[1344] : There is a chance that a VG that should be in passive mode is not.
+epprd_rg:clvaryonvg(0.168):datavg[1345] : Run cl_pvo to put it in passive mode if possible.
+epprd_rg:clvaryonvg(0.168):datavg[1350] [[ -z passive ]]
+epprd_rg:clvaryonvg(0.168):datavg[1350] [[ passive == ordinary ]]
+epprd_rg:clvaryonvg(0.168):datavg[1350] [[ passive == passive ]]
+epprd_rg:clvaryonvg(0.168):datavg[1350] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.168):datavg[1381] : Let us assume that the old-style synclvodm would sync all the PV/FS changes.
+epprd_rg:clvaryonvg(0.168):datavg[1383] expimpvg_notrequired=1
+epprd_rg:clvaryonvg(0.168):datavg[1386] : Optimistically give varyonvg a try.
+epprd_rg:clvaryonvg(0.168):datavg[1388] [[ passive == passive ]]
+epprd_rg:clvaryonvg(0.168):datavg[1391] : If the volume group was varied on in passive mode when this node came
+epprd_rg:clvaryonvg(0.168):datavg[1392] : up, flip it over to active mode. The following logic will then fall
+epprd_rg:clvaryonvg(0.168):datavg[1393] : through to updatefs.
+epprd_rg:clvaryonvg(0.168):datavg[1395] [[ passive == passive ]]
+epprd_rg:clvaryonvg(0.168):datavg[1395] A_flag=-A
+epprd_rg:clvaryonvg(0.168):datavg[1396] varyonvg -n -c -A -O datavg
+epprd_rg:clvaryonvg(0.169):datavg[1396] 2>& 1
+epprd_rg:clvaryonvg(0.388):datavg[1396] varyonvg_output=''
+epprd_rg:clvaryonvg(0.388):datavg[1397] varyonvg_rc=0
+epprd_rg:clvaryonvg(0.388):datavg[1397] typeset -li varyonvg_rc
+epprd_rg:clvaryonvg(0.388):datavg[1399] (( 0 != 0 ))
+epprd_rg:clvaryonvg(0.388):datavg[1481] (( 0 != 0 ))
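
Because the VG was already held passively, the varyon is a mode flip rather than a fresh activation: -c requests concurrent mode, -A takes the enhanced-concurrent VG from passive to active, -n skips stale-partition sync, and -O applies the override armed earlier. Empty output with return code 0 is the clean path; a non-zero code would be sorted against the NOQUORUM (20), MISSING_UPDATES (30) and DATA_DIVERGENCE (31) constants defined at entry. Sketched:

    [[ $vg_on_mode == passive ]] && A_flag=-A        # passive -> active flip
    varyonvg_output=$(varyonvg -n -c $A_flag $O_flag $VG 2>&1)
    varyonvg_rc=$?
    (( varyonvg_rc != 0 )) && :   # recovery paths: quorum loss, missing updates, divergence
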
+epprd_rg:clvaryonvg(0.388):datavg[1576] : At this point, datavg should be varied on
+epprd_rg:clvaryonvg(0.389):datavg[1578] [[ FALSE == TRUE ]]
+epprd_rg:clvaryonvg(0.389):datavg[1585] [[ -z 63d4d463310b8939 ]]
+epprd_rg:clvaryonvg(0.389):datavg[1592] vgdatimestamps
+epprd_rg:clvaryonvg(0.389):datavg[vgdatimestamps:201] PS4_FUNC=vgdatimestamps
+epprd_rg:clvaryonvg(0.389):datavg[vgdatimestamps:201] typeset PS4_FUNC
+epprd_rg:clvaryonvg(0.389):datavg[vgdatimestamps:202] [[ high == high ]]
+epprd_rg:clvaryonvg(0.389):datavg[vgdatimestamps:202] set -x
+epprd_rg:clvaryonvg(0.389):datavg[vgdatimestamps:203] set -u
+epprd_rg:clvaryonvg(0.389):datavg[vgdatimestamps:206] : See what timestamp LVM has recorded from the last time it checked
+epprd_rg:clvaryonvg(0.389):datavg[vgdatimestamps:207] : the disks
+epprd_rg:clvaryonvg(0.389):datavg[vgdatimestamps:209] /usr/sbin/getlvodm -T 00c44af100004b00000001851e9dc053
+epprd_rg:clvaryonvg(0.390):datavg[vgdatimestamps:209] 2> /dev/null
+epprd_rg:clvaryonvg(0.392):datavg[vgdatimestamps:209] TS_FROM_ODM=63d4d87b2421bec0
+epprd_rg:clvaryonvg(0.392):datavg[vgdatimestamps:212] : Check to see if HACMP is maintaining a timestamp for this volume group
+epprd_rg:clvaryonvg(0.392):datavg[vgdatimestamps:213] : Needed for some older volume groups
+epprd_rg:clvaryonvg(0.392):datavg[vgdatimestamps:215] [[ -s /usr/es/sbin/cluster/etc/vg/datavg.tstamp ]]
+epprd_rg:clvaryonvg(0.392):datavg[vgdatimestamps:234] : Get the time stamp from the actual disk
+epprd_rg:clvaryonvg(0.392):datavg[vgdatimestamps:236] clvgdats /dev/datavg
+epprd_rg:clvaryonvg(0.393):datavg[vgdatimestamps:236] 2> /dev/null
+epprd_rg:clvaryonvg(0.403):datavg[vgdatimestamps:236] TS_FROM_DISK=63d4d87b2421bec0
+epprd_rg:clvaryonvg(0.403):datavg[vgdatimestamps:237] clvgdats_rc=0
+epprd_rg:clvaryonvg(0.403):datavg[vgdatimestamps:238] (( 0 != 0 ))
+epprd_rg:clvaryonvg(0.403):datavg[vgdatimestamps:247] [[ -z 63d4d87b2421bec0 ]]
+epprd_rg:clvaryonvg(0.403):datavg[1600] [[ 63d4d87b2421bec0 != 63d4d87b2421bec0 ]]
+epprd_rg:clvaryonvg(0.403):datavg[1622] [[ FALSE == TRUE ]]
+epprd_rg:clvaryonvg(0.403):datavg[1633] : Even if everything looks OK, update the local file system
+epprd_rg:clvaryonvg(0.403):datavg[1634] : definitions, since changes there do not show up in the
+epprd_rg:clvaryonvg(0.403):datavg[1635] : VGDA timestamps
+epprd_rg:clvaryonvg(0.403):datavg[1637] updatefs datavg
+epprd_rg:clvaryonvg(0.403):datavg[updatefs:506] PS4_FUNC=updatefs
+epprd_rg:clvaryonvg(0.403):datavg[updatefs:506] typeset PS4_FUNC
+epprd_rg:clvaryonvg(0.403):datavg[updatefs:507] [[ high == high ]]
+epprd_rg:clvaryonvg(0.403):datavg[updatefs:507] set -x
+epprd_rg:clvaryonvg(0.403):datavg[updatefs:508] do_imfs=''
+epprd_rg:clvaryonvg(0.403):datavg[updatefs:508] typeset do_imfs
+epprd_rg:clvaryonvg(0.403):datavg[updatefs:509] has_typed_lvs=''
+epprd_rg:clvaryonvg(0.403):datavg[updatefs:509] typeset has_typed_lvs
+epprd_rg:clvaryonvg(0.403):datavg[updatefs:512] : Delete existing filesystem information for this volume group. This is
+epprd_rg:clvaryonvg(0.403):datavg[updatefs:513] : needed because imfs will not update an existing /etc/filesystems entry.
+epprd_rg:clvaryonvg(0.405):datavg[updatefs:515] cut -f1 '-d '
+epprd_rg:clvaryonvg(0.405):datavg[updatefs:515] /usr/sbin/getlvodm -L datavg
+epprd_rg:clvaryonvg(0.409):datavg[updatefs:515] lv_list=$'saplv\nsapmntlv\noraclelv\nepplv\noraarchlv\nsapdata1lv\nsapdata2lv\nsapdata3lv\nsapdata4lv\nboardlv\noriglogAlv\noriglogBlv\nmirrlogAlv\nmirrlogBlv\nepprdaloglv'
+epprd_rg:clvaryonvg(0.409):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.409):datavg[updatefs:521] clodmget -q 'name = saplv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.412):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.412):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.412):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.412):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.412):datavg[updatefs:530] /usr/sbin/getlvcb -f saplv
+epprd_rg:clvaryonvg(0.413):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.431):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.431):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(0.431):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.433):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.433):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false
+epprd_rg:clvaryonvg(0.436):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.436):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.436):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.436):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.438):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.457):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.457):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.457):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.457):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.459):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.459):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.462):datavg[updatefs:545] /usr/sbin/imfs -lx saplv
+epprd_rg:clvaryonvg(0.466):datavg[updatefs:546] do_imfs=true
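
The same pattern now repeats for every LV that getlvodm -L returned. Per LV the script: skips raw LVs (CuAt type=raw), reads the filesystem stanza from the LVCB, extracts the jfs2 log device, requires that the log LV's LVCB is readable too, and only then deletes the stanza with imfs -lx so it can be rebuilt from current data. Condensed (the typed-LV bookkeeping is omitted):

    for lv in $(getlvodm -L $VG | cut -f1 '-d '); do
        [[ -n $(clodmget -q "name = $lv and attribute = type and value = raw" \
                        -f value -n CuAt) ]] && continue
        fs_info=$(LC_ALL=C getlvcb -f $lv)              # e.g. vfs=jfs2:log=/dev/epprdaloglv:...
        [[ $fs_info == *([[:space:]]) ]] && continue    # not a filesystem LV
        log_lv=$(echo "$fs_info" | sed -n 's/.*log=\([^:]*\).*/\1/p')
        if [[ -n $log_lv && $log_lv != INLINE ]]; then
            getlvcb -t ${log_lv#/dev/} > /dev/null 2>&1 || continue  # log LVCB readable?
        fi
        /usr/sbin/imfs -lx $lv      # drop the stale /etc/filesystems stanza
        do_imfs=true
    done
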
+epprd_rg:clvaryonvg(0.466):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.466):datavg[updatefs:521] clodmget -q 'name = sapmntlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.469):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.469):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.469):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.469):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.469):datavg[updatefs:530] /usr/sbin/getlvcb -f sapmntlv
+epprd_rg:clvaryonvg(0.470):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.488):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.488):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(0.488):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.490):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.490):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false
+epprd_rg:clvaryonvg(0.494):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.494):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.494):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.494):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.495):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.513):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.513):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.513):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.513):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.514):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.514):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.518):datavg[updatefs:545] /usr/sbin/imfs -lx sapmntlv
+epprd_rg:clvaryonvg(0.522):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.522):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.522):datavg[updatefs:521] clodmget -q 'name = oraclelv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.525):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.525):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.525):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.525):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.526):datavg[updatefs:530] /usr/sbin/getlvcb -f oraclelv
+epprd_rg:clvaryonvg(0.526):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.544):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.544):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(0.544):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.546):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.546):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false
+epprd_rg:clvaryonvg(0.550):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.550):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.550):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.550):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.551):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.570):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.570):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.570):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.570):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.571):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.571):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.574):datavg[updatefs:545] /usr/sbin/imfs -lx oraclelv
+epprd_rg:clvaryonvg(0.578):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.578):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.579):datavg[updatefs:521] clodmget -q 'name = epplv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.582):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.582):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.582):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.582):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.582):datavg[updatefs:530] /usr/sbin/getlvcb -f epplv
+epprd_rg:clvaryonvg(0.583):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.601):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.601):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(0.601):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.602):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.603):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false
+epprd_rg:clvaryonvg(0.606):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.606):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.606):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.606):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.608):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.628):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.628):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.628):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.628):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.629):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.629):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.632):datavg[updatefs:545] /usr/sbin/imfs -lx epplv
+epprd_rg:clvaryonvg(0.637):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.637):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.637):datavg[updatefs:521] clodmget -q 'name = oraarchlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.641):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.641):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.641):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.641):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.641):datavg[updatefs:530] /usr/sbin/getlvcb -f oraarchlv
+epprd_rg:clvaryonvg(0.642):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.661):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.661):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(0.661):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.662):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.662):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false
+epprd_rg:clvaryonvg(0.666):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.666):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.666):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.666):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.667):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.686):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.686):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.686):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.686):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.687):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.687):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.691):datavg[updatefs:545] /usr/sbin/imfs -lx oraarchlv
+epprd_rg:clvaryonvg(0.695):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.695):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.695):datavg[updatefs:521] clodmget -q 'name = sapdata1lv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.698):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.698):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.698):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.698):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.698):datavg[updatefs:530] /usr/sbin/getlvcb -f sapdata1lv
+epprd_rg:clvaryonvg(0.699):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.717):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.717):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(0.717):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.719):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.719):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false
+epprd_rg:clvaryonvg(0.722):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.722):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.722):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.722):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.724):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.743):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.743):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.743):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.743):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.744):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.744):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.747):datavg[updatefs:545] /usr/sbin/imfs -lx sapdata1lv
+epprd_rg:clvaryonvg(0.751):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.751):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.751):datavg[updatefs:521] clodmget -q 'name = sapdata2lv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.754):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.755):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.755):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.755):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.755):datavg[updatefs:530] /usr/sbin/getlvcb -f sapdata2lv
+epprd_rg:clvaryonvg(0.756):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.773):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.773):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(0.773):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.775):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.775):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false
+epprd_rg:clvaryonvg(0.778):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.778):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.778):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.778):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.780):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.799):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.799):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.799):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.799):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.800):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.800):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.803):datavg[updatefs:545] /usr/sbin/imfs -lx sapdata2lv
+epprd_rg:clvaryonvg(0.807):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.807):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.807):datavg[updatefs:521] clodmget -q 'name = sapdata3lv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.811):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.811):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.811):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.811):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.811):datavg[updatefs:530] /usr/sbin/getlvcb -f sapdata3lv
+epprd_rg:clvaryonvg(0.812):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.829):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.829):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(0.829):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.831):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.831):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false
+epprd_rg:clvaryonvg(0.835):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.835):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.835):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.835):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.836):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.855):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.855):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.855):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.855):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.856):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.856):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.859):datavg[updatefs:545] /usr/sbin/imfs -lx sapdata3lv
+epprd_rg:clvaryonvg(0.863):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.863):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.863):datavg[updatefs:521] clodmget -q 'name = sapdata4lv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.867):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.867):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.867):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.867):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.867):datavg[updatefs:530] /usr/sbin/getlvcb -f sapdata4lv
+epprd_rg:clvaryonvg(0.868):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.885):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.885):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(0.885):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.887):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.887):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false
+epprd_rg:clvaryonvg(0.891):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.891):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.891):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.891):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.892):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.912):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.912):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.912):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.912):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.913):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.913):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.916):datavg[updatefs:545] /usr/sbin/imfs -lx sapdata4lv
+epprd_rg:clvaryonvg(0.920):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.920):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.920):datavg[updatefs:521] clodmget -q 'name = boardlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.924):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.924):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.924):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.924):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.924):datavg[updatefs:530] /usr/sbin/getlvcb -f boardlv
+epprd_rg:clvaryonvg(0.925):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.945):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.945):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(0.945):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.946):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.946):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false
+epprd_rg:clvaryonvg(0.950):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.950):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.950):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.950):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.951):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.970):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.970):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.970):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.970):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.971):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(0.971):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.974):datavg[updatefs:545] /usr/sbin/imfs -lx boardlv
+epprd_rg:clvaryonvg(0.979):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.979):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.979):datavg[updatefs:521] clodmget -q 'name = origlogAlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.982):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.982):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.982):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.982):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.982):datavg[updatefs:530] /usr/sbin/getlvcb -f origlogAlv
+epprd_rg:clvaryonvg(0.983):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(1.000):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(1.000):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(1.000):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(1.002):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false
+epprd_rg:clvaryonvg(1.003):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(1.007):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(1.007):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(1.007):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(1.007):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(1.008):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(1.026):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(1.026):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(1.026):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(1.026):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(1.027):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(1.028):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(1.031):datavg[updatefs:545] /usr/sbin/imfs -lx origlogAlv
+epprd_rg:clvaryonvg(1.035):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(1.035):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(1.035):datavg[updatefs:521] clodmget -q 'name = origlogBlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(1.039):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(1.039):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(1.039):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(1.039):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(1.039):datavg[updatefs:530] /usr/sbin/getlvcb -f origlogBlv
+epprd_rg:clvaryonvg(1.040):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(1.057):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(1.057):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(1.057):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(1.058):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false
+epprd_rg:clvaryonvg(1.059):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(1.063):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(1.063):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(1.063):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(1.063):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(1.064):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(1.082):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(1.082):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(1.082):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(1.082):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(1.083):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(1.084):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(1.087):datavg[updatefs:545] /usr/sbin/imfs -lx origlogBlv
+epprd_rg:clvaryonvg(1.091):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(1.091):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(1.091):datavg[updatefs:521] clodmget -q 'name = mirrlogAlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(1.094):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(1.094):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(1.094):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(1.094):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(1.094):datavg[updatefs:530] /usr/sbin/getlvcb -f mirrlogAlv
+epprd_rg:clvaryonvg(1.095):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(1.111):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(1.111):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(1.111):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(1.113):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false
+epprd_rg:clvaryonvg(1.114):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(1.117):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(1.117):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(1.117):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(1.118):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(1.119):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(1.136):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(1.136):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(1.136):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(1.136):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(1.137):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(1.139):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(1.141):datavg[updatefs:545] /usr/sbin/imfs -lx mirrlogAlv
+epprd_rg:clvaryonvg(1.145):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(1.145):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(1.145):datavg[updatefs:521] clodmget -q 'name = mirrlogBlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(1.148):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(1.148):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(1.149):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(1.149):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(1.149):datavg[updatefs:530] /usr/sbin/getlvcb -f mirrlogBlv
+epprd_rg:clvaryonvg(1.149):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(1.166):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(1.166):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(1.166):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(1.168):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false
+epprd_rg:clvaryonvg(1.169):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(1.172):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(1.172):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(1.172):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(1.172):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(1.173):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(1.191):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(1.191):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(1.191):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(1.191):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(1.192):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(1.193):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(1.196):datavg[updatefs:545] /usr/sbin/imfs -lx mirrlogBlv
+epprd_rg:clvaryonvg(1.200):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(1.200):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(1.200):datavg[updatefs:521] clodmget -q 'name = epprdaloglv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(1.203):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(1.203):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(1.203):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(1.203):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(1.203):datavg[updatefs:530] /usr/sbin/getlvcb -f epprdaloglv
+epprd_rg:clvaryonvg(1.204):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(1.221):datavg[updatefs:530] fs_info=' '
+epprd_rg:clvaryonvg(1.221):datavg[updatefs:531] [[ -n ' ' ]]
+epprd_rg:clvaryonvg(1.221):datavg[updatefs:531] [[ ' ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(1.221):datavg[updatefs:552] [[ -n true ]]
+epprd_rg:clvaryonvg(1.221):datavg[updatefs:556] : Pick up any file system changes that may have happened when
+epprd_rg:clvaryonvg(1.221):datavg[updatefs:557] : the volume group was owned by another node. That is, if a
+epprd_rg:clvaryonvg(1.221):datavg[updatefs:558] : local change was made - not through C-SPOC, we would have no
+epprd_rg:clvaryonvg(1.221):datavg[updatefs:559] : indication it happened.
+epprd_rg:clvaryonvg(1.221):datavg[updatefs:561] [[ -z '' ]]
+epprd_rg:clvaryonvg(1.221):datavg[updatefs:563] /usr/sbin/imfs datavg
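
With the stale stanzas gone, a single imfs over the whole VG re-imports /etc/filesystems entries from the LVCBs, picking up any filesystem changes made while another node owned datavg (the jump from 1.221 to 1.888 in the timings is this rebuild). The two invocations contrast as:

    imfs -lx saplv    # remove the stanza for one logical volume
    imfs datavg       # recreate stanzas for every filesystem LV in the VG from LVCB data
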
+epprd_rg:clvaryonvg(1.888):datavg[updatefs:589] : For a valid file system configuration, the mount point in
+epprd_rg:clvaryonvg(1.888):datavg[updatefs:590] : /etc/filesystems for the logical volume should match the
+epprd_rg:clvaryonvg(1.888):datavg[updatefs:591] : label of the logical volume. The above imfs should have
+epprd_rg:clvaryonvg(1.888):datavg[updatefs:592] : matched those two. Now, check that they match the label
+epprd_rg:clvaryonvg(1.888):datavg[updatefs:593] : for the logical volume as saved in ODM.
+epprd_rg:clvaryonvg(1.888):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(1.888):datavg[updatefs:600] clodmget -q 'name = saplv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(1.892):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(1.892):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(1.892):datavg[updatefs:607] /usr/sbin/getlvcb -f saplv
+epprd_rg:clvaryonvg(1.909):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(1.909):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(1.909):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(1.909):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(1.909):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(1.909):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(1.909):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(1.909):datavg[updatefs:623] : Label and file system type from LVCB on disk for saplv
+epprd_rg:clvaryonvg(1.910):datavg[updatefs:625] getlvcb -T -A saplv
+epprd_rg:clvaryonvg(1.910):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(1.913):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(1.916):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(1.918):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(1.932):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(1.932):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(1.932):datavg[updatefs:632] : Mount point in /etc/filesystems for saplv
+epprd_rg:clvaryonvg(1.934):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/saplv$' /etc/filesystems
+epprd_rg:clvaryonvg(1.936):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(1.938):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(1.941):datavg[updatefs:634] fs_mount_point=/usr/sap
+epprd_rg:clvaryonvg(1.941):datavg[updatefs:637] : CuAt label attribute for saplv
+epprd_rg:clvaryonvg(1.941):datavg[updatefs:639] clodmget -q 'name = saplv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(1.944):datavg[updatefs:639] CuAt_label=/usr/sap
+epprd_rg:clvaryonvg(1.946):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(1.947):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(1.950):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(1.950):datavg[updatefs:657] [[ -z /usr/sap ]]
+epprd_rg:clvaryonvg(1.950):datavg[updatefs:657] [[ /usr/sap == None ]]
+epprd_rg:clvaryonvg(1.950):datavg[updatefs:665] [[ /usr/sap == /usr/sap ]]
+epprd_rg:clvaryonvg(1.950):datavg[updatefs:665] [[ /usr/sap != /usr/sap ]]
+epprd_rg:clvaryonvg(1.950):datavg[updatefs:685] [[ /usr/sap != /usr/sap ]]
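
Note: the block above repeats for each logical volume in datavg. It cross-checks three records of the mount point - the LVCB label on disk (getlvcb), the stanza in /etc/filesystems, and the CuAt label in the ODM - and only continues when all three agree. A condensed ksh sketch of that check, reusing the same commands the trace shows (the LV name is illustrative):

    LV=saplv
    getlvcb -T -A $LV | egrep -w 'label =|type =' | paste -s - - |
        read skip skip lvcb_label skip skip lvcb_type rest
    fs_mount_point=$(egrep -p "^([[:space:]])*dev([[:space:]])*= /dev/$LV\$" \
        /etc/filesystems | head -1 | cut -f1 -d:)
    CuAt_label=$(clodmget -q "name = $LV and attribute = label" -f value -n CuAt)
    [[ $lvcb_label == "$fs_mount_point" && $fs_mount_point == "$CuAt_label" ]] ||
        print -u2 "mount point records disagree for $LV"
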
+epprd_rg:clvaryonvg(1.950):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(1.950):datavg[updatefs:600] clodmget -q 'name = sapmntlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(1.953):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(1.954):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(1.954):datavg[updatefs:607] /usr/sbin/getlvcb -f sapmntlv
+epprd_rg:clvaryonvg(1.971):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(1.971):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(1.971):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(1.971):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(1.971):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(1.971):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(1.971):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(1.971):datavg[updatefs:623] : Label and file system type from LVCB on disk for sapmntlv
+epprd_rg:clvaryonvg(1.972):datavg[updatefs:625] getlvcb -T -A sapmntlv
+epprd_rg:clvaryonvg(1.972):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(1.975):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(1.978):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(1.980):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(1.993):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(1.993):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(1.993):datavg[updatefs:632] : Mount point in /etc/filesystems for sapmntlv
+epprd_rg:clvaryonvg(1.995):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/sapmntlv$' /etc/filesystems
+epprd_rg:clvaryonvg(1.997):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(1.999):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.002):datavg[updatefs:634] fs_mount_point=/sapmnt
+epprd_rg:clvaryonvg(2.002):datavg[updatefs:637] : CuAt label attribute for sapmntlv
+epprd_rg:clvaryonvg(2.002):datavg[updatefs:639] clodmget -q 'name = sapmntlv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.006):datavg[updatefs:639] CuAt_label=/sapmnt
+epprd_rg:clvaryonvg(2.007):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.008):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.011):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.012):datavg[updatefs:657] [[ -z /sapmnt ]]
+epprd_rg:clvaryonvg(2.012):datavg[updatefs:657] [[ /sapmnt == None ]]
+epprd_rg:clvaryonvg(2.012):datavg[updatefs:665] [[ /sapmnt == /sapmnt ]]
+epprd_rg:clvaryonvg(2.012):datavg[updatefs:665] [[ /sapmnt != /sapmnt ]]
+epprd_rg:clvaryonvg(2.012):datavg[updatefs:685] [[ /sapmnt != /sapmnt ]]
+epprd_rg:clvaryonvg(2.012):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.012):datavg[updatefs:600] clodmget -q 'name = oraclelv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.015):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.015):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.015):datavg[updatefs:607] /usr/sbin/getlvcb -f oraclelv
+epprd_rg:clvaryonvg(2.032):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(2.032):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.032):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.032):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.032):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.032):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(2.032):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.032):datavg[updatefs:623] : Label and file system type from LVCB on disk for oraclelv
+epprd_rg:clvaryonvg(2.033):datavg[updatefs:625] getlvcb -T -A oraclelv
+epprd_rg:clvaryonvg(2.033):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.037):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.040):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.042):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.054):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.054):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.054):datavg[updatefs:632] : Mount point in /etc/filesystems for oraclelv
+epprd_rg:clvaryonvg(2.056):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/oraclelv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.058):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.060):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.063):datavg[updatefs:634] fs_mount_point=/oracle
+epprd_rg:clvaryonvg(2.063):datavg[updatefs:637] : CuAt label attribute for oraclelv
+epprd_rg:clvaryonvg(2.063):datavg[updatefs:639] clodmget -q 'name = oraclelv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.067):datavg[updatefs:639] CuAt_label=/oracle
+epprd_rg:clvaryonvg(2.068):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.069):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.072):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.072):datavg[updatefs:657] [[ -z /oracle ]]
+epprd_rg:clvaryonvg(2.073):datavg[updatefs:657] [[ /oracle == None ]]
+epprd_rg:clvaryonvg(2.073):datavg[updatefs:665] [[ /oracle == /oracle ]]
+epprd_rg:clvaryonvg(2.073):datavg[updatefs:665] [[ /oracle != /oracle ]]
+epprd_rg:clvaryonvg(2.073):datavg[updatefs:685] [[ /oracle != /oracle ]]
+epprd_rg:clvaryonvg(2.073):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.073):datavg[updatefs:600] clodmget -q 'name = epplv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.076):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.076):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.076):datavg[updatefs:607] /usr/sbin/getlvcb -f epplv
+epprd_rg:clvaryonvg(2.093):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(2.094):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.094):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.094):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.094):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.094):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(2.094):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.094):datavg[updatefs:623] : Label and file system type from LVCB on disk for epplv
+epprd_rg:clvaryonvg(2.095):datavg[updatefs:625] getlvcb -T -A epplv
+epprd_rg:clvaryonvg(2.095):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.098):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.101):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.103):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.115):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.115):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.115):datavg[updatefs:632] : Mount point in /etc/filesystems for epplv
+epprd_rg:clvaryonvg(2.117):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/epplv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.119):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.121):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.124):datavg[updatefs:634] fs_mount_point=/oracle/EPP
+epprd_rg:clvaryonvg(2.124):datavg[updatefs:637] : CuAt label attribute for epplv
+epprd_rg:clvaryonvg(2.124):datavg[updatefs:639] clodmget -q 'name = epplv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.128):datavg[updatefs:639] CuAt_label=/oracle/EPP
+epprd_rg:clvaryonvg(2.129):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.130):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.134):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.134):datavg[updatefs:657] [[ -z /oracle/EPP ]]
+epprd_rg:clvaryonvg(2.134):datavg[updatefs:657] [[ /oracle/EPP == None ]]
+epprd_rg:clvaryonvg(2.134):datavg[updatefs:665] [[ /oracle/EPP == /oracle/EPP ]]
+epprd_rg:clvaryonvg(2.134):datavg[updatefs:665] [[ /oracle/EPP != /oracle/EPP ]]
+epprd_rg:clvaryonvg(2.134):datavg[updatefs:685] [[ /oracle/EPP != /oracle/EPP ]]
+epprd_rg:clvaryonvg(2.134):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.134):datavg[updatefs:600] clodmget -q 'name = oraarchlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.137):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.137):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.137):datavg[updatefs:607] /usr/sbin/getlvcb -f oraarchlv
+epprd_rg:clvaryonvg(2.154):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(2.154):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.154):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.154):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.155):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.155):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(2.155):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.155):datavg[updatefs:623] : Label and file system type from LVCB on disk for oraarchlv
+epprd_rg:clvaryonvg(2.155):datavg[updatefs:625] getlvcb -T -A oraarchlv
+epprd_rg:clvaryonvg(2.156):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.159):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.162):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.164):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.177):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.177):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.177):datavg[updatefs:632] : Mount point in /etc/filesystems for oraarchlv
+epprd_rg:clvaryonvg(2.178):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/oraarchlv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.181):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.183):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.186):datavg[updatefs:634] fs_mount_point=/oracle/EPP/oraarch
+epprd_rg:clvaryonvg(2.186):datavg[updatefs:637] : CuAt label attribute for oraarchlv
+epprd_rg:clvaryonvg(2.186):datavg[updatefs:639] clodmget -q 'name = oraarchlv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.189):datavg[updatefs:639] CuAt_label=/oracle/EPP/oraarch
+epprd_rg:clvaryonvg(2.191):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.191):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.195):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.195):datavg[updatefs:657] [[ -z /oracle/EPP/oraarch ]]
+epprd_rg:clvaryonvg(2.195):datavg[updatefs:657] [[ /oracle/EPP/oraarch == None ]]
+epprd_rg:clvaryonvg(2.195):datavg[updatefs:665] [[ /oracle/EPP/oraarch == /oracle/EPP/oraarch ]]
+epprd_rg:clvaryonvg(2.196):datavg[updatefs:665] [[ /oracle/EPP/oraarch != /oracle/EPP/oraarch ]]
+epprd_rg:clvaryonvg(2.196):datavg[updatefs:685] [[ /oracle/EPP/oraarch != /oracle/EPP/oraarch ]]
+epprd_rg:clvaryonvg(2.196):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.196):datavg[updatefs:600] clodmget -q 'name = sapdata1lv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.199):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.199):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.199):datavg[updatefs:607] /usr/sbin/getlvcb -f sapdata1lv
+epprd_rg:clvaryonvg(2.216):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(2.216):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.216):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.216):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.216):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.216):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(2.216):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.216):datavg[updatefs:623] : Label and file system type from LVCB on disk for sapdata1lv
+epprd_rg:clvaryonvg(2.217):datavg[updatefs:625] getlvcb -T -A sapdata1lv
+epprd_rg:clvaryonvg(2.217):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.221):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.224):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.225):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.238):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.238):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.238):datavg[updatefs:632] : Mount point in /etc/filesystems for sapdata1lv
+epprd_rg:clvaryonvg(2.240):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/sapdata1lv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.242):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.244):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.247):datavg[updatefs:634] fs_mount_point=/oracle/EPP/sapdata1
+epprd_rg:clvaryonvg(2.247):datavg[updatefs:637] : CuAt label attribute for sapdata1lv
+epprd_rg:clvaryonvg(2.247):datavg[updatefs:639] clodmget -q 'name = sapdata1lv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.250):datavg[updatefs:639] CuAt_label=/oracle/EPP/sapdata1
+epprd_rg:clvaryonvg(2.252):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.253):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.256):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.256):datavg[updatefs:657] [[ -z /oracle/EPP/sapdata1 ]]
+epprd_rg:clvaryonvg(2.256):datavg[updatefs:657] [[ /oracle/EPP/sapdata1 == None ]]
+epprd_rg:clvaryonvg(2.256):datavg[updatefs:665] [[ /oracle/EPP/sapdata1 == /oracle/EPP/sapdata1 ]]
+epprd_rg:clvaryonvg(2.256):datavg[updatefs:665] [[ /oracle/EPP/sapdata1 != /oracle/EPP/sapdata1 ]]
+epprd_rg:clvaryonvg(2.256):datavg[updatefs:685] [[ /oracle/EPP/sapdata1 != /oracle/EPP/sapdata1 ]]
+epprd_rg:clvaryonvg(2.256):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.256):datavg[updatefs:600] clodmget -q 'name = sapdata2lv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.260):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.260):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.260):datavg[updatefs:607] /usr/sbin/getlvcb -f sapdata2lv
+epprd_rg:clvaryonvg(2.277):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(2.277):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.277):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.277):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.277):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.277):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(2.277):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.277):datavg[updatefs:623] : Label and file system type from LVCB on disk for sapdata2lv
+epprd_rg:clvaryonvg(2.278):datavg[updatefs:625] getlvcb -T -A sapdata2lv
+epprd_rg:clvaryonvg(2.279):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.282):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.285):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.287):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.300):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.300):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.300):datavg[updatefs:632] : Mount point in /etc/filesystems for sapdata2lv
+epprd_rg:clvaryonvg(2.302):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.303):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/sapdata2lv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.305):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.309):datavg[updatefs:634] fs_mount_point=/oracle/EPP/sapdata2
+epprd_rg:clvaryonvg(2.309):datavg[updatefs:637] : CuAt label attribute for sapdata2lv
+epprd_rg:clvaryonvg(2.309):datavg[updatefs:639] clodmget -q 'name = sapdata2lv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.312):datavg[updatefs:639] CuAt_label=/oracle/EPP/sapdata2
+epprd_rg:clvaryonvg(2.314):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.315):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.318):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.318):datavg[updatefs:657] [[ -z /oracle/EPP/sapdata2 ]]
+epprd_rg:clvaryonvg(2.318):datavg[updatefs:657] [[ /oracle/EPP/sapdata2 == None ]]
+epprd_rg:clvaryonvg(2.318):datavg[updatefs:665] [[ /oracle/EPP/sapdata2 == /oracle/EPP/sapdata2 ]]
+epprd_rg:clvaryonvg(2.318):datavg[updatefs:665] [[ /oracle/EPP/sapdata2 != /oracle/EPP/sapdata2 ]]
+epprd_rg:clvaryonvg(2.318):datavg[updatefs:685] [[ /oracle/EPP/sapdata2 != /oracle/EPP/sapdata2 ]]
+epprd_rg:clvaryonvg(2.318):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.318):datavg[updatefs:600] clodmget -q 'name = sapdata3lv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.321):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.321):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.321):datavg[updatefs:607] /usr/sbin/getlvcb -f sapdata3lv
+epprd_rg:clvaryonvg(2.339):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(2.339):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.339):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.339):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.339):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.339):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(2.339):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.339):datavg[updatefs:623] : Label and file system type from LVCB on disk for sapdata3lv
+epprd_rg:clvaryonvg(2.340):datavg[updatefs:625] getlvcb -T -A sapdata3lv
+epprd_rg:clvaryonvg(2.340):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.344):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.346):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.348):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.362):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.362):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.362):datavg[updatefs:632] : Mount point in /etc/filesystems for sapdata3lv
+epprd_rg:clvaryonvg(2.363):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/sapdata3lv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.366):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.367):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.370):datavg[updatefs:634] fs_mount_point=/oracle/EPP/sapdata3
+epprd_rg:clvaryonvg(2.370):datavg[updatefs:637] : CuAt label attribute for sapdata3lv
+epprd_rg:clvaryonvg(2.370):datavg[updatefs:639] clodmget -q 'name = sapdata3lv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.374):datavg[updatefs:639] CuAt_label=/oracle/EPP/sapdata3
+epprd_rg:clvaryonvg(2.375):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.377):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.380):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.380):datavg[updatefs:657] [[ -z /oracle/EPP/sapdata3 ]]
+epprd_rg:clvaryonvg(2.380):datavg[updatefs:657] [[ /oracle/EPP/sapdata3 == None ]]
+epprd_rg:clvaryonvg(2.380):datavg[updatefs:665] [[ /oracle/EPP/sapdata3 == /oracle/EPP/sapdata3 ]]
+epprd_rg:clvaryonvg(2.380):datavg[updatefs:665] [[ /oracle/EPP/sapdata3 != /oracle/EPP/sapdata3 ]]
+epprd_rg:clvaryonvg(2.380):datavg[updatefs:685] [[ /oracle/EPP/sapdata3 != /oracle/EPP/sapdata3 ]]
+epprd_rg:clvaryonvg(2.380):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.380):datavg[updatefs:600] clodmget -q 'name = sapdata4lv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.383):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.383):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.383):datavg[updatefs:607] /usr/sbin/getlvcb -f sapdata4lv
+epprd_rg:clvaryonvg(2.401):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(2.401):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.401):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.401):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.401):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.401):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(2.401):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.401):datavg[updatefs:623] : Label and file system type from LVCB on disk for sapdata4lv
+epprd_rg:clvaryonvg(2.402):datavg[updatefs:625] getlvcb -T -A sapdata4lv
+epprd_rg:clvaryonvg(2.402):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.405):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.408):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.410):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.423):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.423):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.423):datavg[updatefs:632] : Mount point in /etc/filesystems for sapdata4lv
+epprd_rg:clvaryonvg(2.425):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/sapdata4lv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.427):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.429):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.432):datavg[updatefs:634] fs_mount_point=/oracle/EPP/sapdata4
+epprd_rg:clvaryonvg(2.432):datavg[updatefs:637] : CuAt label attribute for sapdata4lv
+epprd_rg:clvaryonvg(2.432):datavg[updatefs:639] clodmget -q 'name = sapdata4lv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.435):datavg[updatefs:639] CuAt_label=/oracle/EPP/sapdata4
+epprd_rg:clvaryonvg(2.437):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.438):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.441):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.442):datavg[updatefs:657] [[ -z /oracle/EPP/sapdata4 ]]
+epprd_rg:clvaryonvg(2.442):datavg[updatefs:657] [[ /oracle/EPP/sapdata4 == None ]]
+epprd_rg:clvaryonvg(2.442):datavg[updatefs:665] [[ /oracle/EPP/sapdata4 == /oracle/EPP/sapdata4 ]]
+epprd_rg:clvaryonvg(2.442):datavg[updatefs:665] [[ /oracle/EPP/sapdata4 != /oracle/EPP/sapdata4 ]]
+epprd_rg:clvaryonvg(2.442):datavg[updatefs:685] [[ /oracle/EPP/sapdata4 != /oracle/EPP/sapdata4 ]]
+epprd_rg:clvaryonvg(2.442):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.442):datavg[updatefs:600] clodmget -q 'name = boardlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.445):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.445):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.445):datavg[updatefs:607] /usr/sbin/getlvcb -f boardlv
+epprd_rg:clvaryonvg(2.463):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(2.463):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.463):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.463):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.463):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.463):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(2.463):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.463):datavg[updatefs:623] : Label and file system type from LVCB on disk for boardlv
+epprd_rg:clvaryonvg(2.464):datavg[updatefs:625] getlvcb -T -A boardlv
+epprd_rg:clvaryonvg(2.464):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.467):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.470):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.472):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.485):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.485):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.485):datavg[updatefs:632] : Mount point in /etc/filesystems for boardlv
+epprd_rg:clvaryonvg(2.486):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/boardlv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.489):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.490):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.494):datavg[updatefs:634] fs_mount_point=/board_org
+epprd_rg:clvaryonvg(2.494):datavg[updatefs:637] : CuAt label attribute for boardlv
+epprd_rg:clvaryonvg(2.494):datavg[updatefs:639] clodmget -q 'name = boardlv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.497):datavg[updatefs:639] CuAt_label=/board_org
+epprd_rg:clvaryonvg(2.499):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.500):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.503):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.503):datavg[updatefs:657] [[ -z /board_org ]]
+epprd_rg:clvaryonvg(2.503):datavg[updatefs:657] [[ /board_org == None ]]
+epprd_rg:clvaryonvg(2.503):datavg[updatefs:665] [[ /board_org == /board_org ]]
+epprd_rg:clvaryonvg(2.503):datavg[updatefs:665] [[ /board_org != /board_org ]]
+epprd_rg:clvaryonvg(2.503):datavg[updatefs:685] [[ /board_org != /board_org ]]
+epprd_rg:clvaryonvg(2.503):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.503):datavg[updatefs:600] clodmget -q 'name = origlogAlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.507):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.507):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.507):datavg[updatefs:607] /usr/sbin/getlvcb -f origlogAlv
+epprd_rg:clvaryonvg(2.524):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(2.524):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.524):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.524):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.524):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.524):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(2.524):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.524):datavg[updatefs:623] : Label and file system type from LVCB on disk for origlogAlv
+epprd_rg:clvaryonvg(2.525):datavg[updatefs:625] getlvcb -T -A origlogAlv
+epprd_rg:clvaryonvg(2.525):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.529):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.532):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.534):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.546):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.546):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.546):datavg[updatefs:632] : Mount point in /etc/filesystems for origlogAlv
+epprd_rg:clvaryonvg(2.548):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/origlogAlv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.550):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.551):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.555):datavg[updatefs:634] fs_mount_point=/oracle/EPP/origlogA
+epprd_rg:clvaryonvg(2.555):datavg[updatefs:637] : CuAt label attribute for origlogAlv
+epprd_rg:clvaryonvg(2.556):datavg[updatefs:639] clodmget -q 'name = origlogAlv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.559):datavg[updatefs:639] CuAt_label=/oracle/EPP/origlogA
+epprd_rg:clvaryonvg(2.560):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.561):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.565):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.565):datavg[updatefs:657] [[ -z /oracle/EPP/origlogA ]]
+epprd_rg:clvaryonvg(2.565):datavg[updatefs:657] [[ /oracle/EPP/origlogA == None ]]
+epprd_rg:clvaryonvg(2.565):datavg[updatefs:665] [[ /oracle/EPP/origlogA == /oracle/EPP/origlogA ]]
+epprd_rg:clvaryonvg(2.565):datavg[updatefs:665] [[ /oracle/EPP/origlogA != /oracle/EPP/origlogA ]]
+epprd_rg:clvaryonvg(2.565):datavg[updatefs:685] [[ /oracle/EPP/origlogA != /oracle/EPP/origlogA ]]
+epprd_rg:clvaryonvg(2.565):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.565):datavg[updatefs:600] clodmget -q 'name = origlogBlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.568):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.568):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.569):datavg[updatefs:607] /usr/sbin/getlvcb -f origlogBlv
+epprd_rg:clvaryonvg(2.586):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(2.586):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.586):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.586):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.586):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.586):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(2.586):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.586):datavg[updatefs:623] : Label and file system type from LVCB on disk for origlogBlv
+epprd_rg:clvaryonvg(2.587):datavg[updatefs:625] getlvcb -T -A origlogBlv
+epprd_rg:clvaryonvg(2.587):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.590):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.593):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.595):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.608):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.608):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.608):datavg[updatefs:632] : Mount point in /etc/filesystems for origlogBlv
+epprd_rg:clvaryonvg(2.609):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/origlogBlv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.612):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.613):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.617):datavg[updatefs:634] fs_mount_point=/oracle/EPP/origlogB
+epprd_rg:clvaryonvg(2.617):datavg[updatefs:637] : CuAt label attribute for origlogBlv
+epprd_rg:clvaryonvg(2.617):datavg[updatefs:639] clodmget -q 'name = origlogBlv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.620):datavg[updatefs:639] CuAt_label=/oracle/EPP/origlogB
+epprd_rg:clvaryonvg(2.621):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.623):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.626):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.626):datavg[updatefs:657] [[ -z /oracle/EPP/origlogB ]]
+epprd_rg:clvaryonvg(2.626):datavg[updatefs:657] [[ /oracle/EPP/origlogB == None ]]
+epprd_rg:clvaryonvg(2.626):datavg[updatefs:665] [[ /oracle/EPP/origlogB == /oracle/EPP/origlogB ]]
+epprd_rg:clvaryonvg(2.626):datavg[updatefs:665] [[ /oracle/EPP/origlogB != /oracle/EPP/origlogB ]]
+epprd_rg:clvaryonvg(2.626):datavg[updatefs:685] [[ /oracle/EPP/origlogB != /oracle/EPP/origlogB ]]
+epprd_rg:clvaryonvg(2.626):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.626):datavg[updatefs:600] clodmget -q 'name = mirrlogAlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.629):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.629):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.629):datavg[updatefs:607] /usr/sbin/getlvcb -f mirrlogAlv
+epprd_rg:clvaryonvg(2.647):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(2.647):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.647):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.647):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.647):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.647):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(2.647):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.647):datavg[updatefs:623] : Label and file system type from LVCB on disk for mirrlogAlv
+epprd_rg:clvaryonvg(2.648):datavg[updatefs:625] getlvcb -T -A mirrlogAlv
+epprd_rg:clvaryonvg(2.648):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.651):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.654):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.656):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.672):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.672):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.672):datavg[updatefs:632] : Mount point in /etc/filesystems for mirrlogAlv
+epprd_rg:clvaryonvg(2.674):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/mirrlogAlv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.676):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.678):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.681):datavg[updatefs:634] fs_mount_point=/oracle/EPP/mirrlogA
+epprd_rg:clvaryonvg(2.681):datavg[updatefs:637] : CuAt label attribute for mirrlogAlv
+epprd_rg:clvaryonvg(2.681):datavg[updatefs:639] clodmget -q 'name = mirrlogAlv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.684):datavg[updatefs:639] CuAt_label=/oracle/EPP/mirrlogA
+epprd_rg:clvaryonvg(2.686):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.687):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.690):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.690):datavg[updatefs:657] [[ -z /oracle/EPP/mirrlogA ]]
+epprd_rg:clvaryonvg(2.690):datavg[updatefs:657] [[ /oracle/EPP/mirrlogA == None ]]
+epprd_rg:clvaryonvg(2.690):datavg[updatefs:665] [[ /oracle/EPP/mirrlogA == /oracle/EPP/mirrlogA ]]
+epprd_rg:clvaryonvg(2.690):datavg[updatefs:665] [[ /oracle/EPP/mirrlogA != /oracle/EPP/mirrlogA ]]
+epprd_rg:clvaryonvg(2.690):datavg[updatefs:685] [[ /oracle/EPP/mirrlogA != /oracle/EPP/mirrlogA ]]
+epprd_rg:clvaryonvg(2.690):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.690):datavg[updatefs:600] clodmget -q 'name = mirrlogBlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.694):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.694):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.694):datavg[updatefs:607] /usr/sbin/getlvcb -f mirrlogBlv
+epprd_rg:clvaryonvg(2.711):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false '
+epprd_rg:clvaryonvg(2.711):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.711):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.711):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.711):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.711):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false ' ]]
+epprd_rg:clvaryonvg(2.711):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.711):datavg[updatefs:623] : Label and file system type from LVCB on disk for mirrlogBlv
+epprd_rg:clvaryonvg(2.712):datavg[updatefs:625] getlvcb -T -A mirrlogBlv
+epprd_rg:clvaryonvg(2.713):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.716):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.719):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.721):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.734):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.734):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.734):datavg[updatefs:632] : Mount point in /etc/filesystems for mirrlogBlv
+epprd_rg:clvaryonvg(2.735):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/mirrlogBlv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.738):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.739):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.743):datavg[updatefs:634] fs_mount_point=/oracle/EPP/mirrlogB
+epprd_rg:clvaryonvg(2.743):datavg[updatefs:637] : CuAt label attribute for mirrlogBlv
+epprd_rg:clvaryonvg(2.743):datavg[updatefs:639] clodmget -q 'name = mirrlogBlv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.746):datavg[updatefs:639] CuAt_label=/oracle/EPP/mirrlogB
+epprd_rg:clvaryonvg(2.748):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.749):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.752):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.752):datavg[updatefs:657] [[ -z /oracle/EPP/mirrlogB ]]
+epprd_rg:clvaryonvg(2.752):datavg[updatefs:657] [[ /oracle/EPP/mirrlogB == None ]]
+epprd_rg:clvaryonvg(2.752):datavg[updatefs:665] [[ /oracle/EPP/mirrlogB == /oracle/EPP/mirrlogB ]]
+epprd_rg:clvaryonvg(2.752):datavg[updatefs:665] [[ /oracle/EPP/mirrlogB != /oracle/EPP/mirrlogB ]]
+epprd_rg:clvaryonvg(2.752):datavg[updatefs:685] [[ /oracle/EPP/mirrlogB != /oracle/EPP/mirrlogB ]]
+epprd_rg:clvaryonvg(2.752):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.752):datavg[updatefs:600] clodmget -q 'name = epprdaloglv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.756):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.756):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.756):datavg[updatefs:607] /usr/sbin/getlvcb -f epprdaloglv
+epprd_rg:clvaryonvg(2.773):datavg[updatefs:607] fs_info=' '
+epprd_rg:clvaryonvg(2.773):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.773):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.773):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.773):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.773):datavg[updatefs:618] [[ -z ' ' ]]
+epprd_rg:clvaryonvg(2.773):datavg[updatefs:618] [[ ' ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.773):datavg[updatefs:620] continue
+epprd_rg:clvaryonvg(2.773):datavg[1641] : At this point, the volume should be varied on, so get the current
+epprd_rg:clvaryonvg(2.773):datavg[1642] : timestamp if needed
+epprd_rg:clvaryonvg(2.773):datavg[1644] vgdatimestamps
+epprd_rg:clvaryonvg(2.773):datavg[vgdatimestamps:201] PS4_FUNC=vgdatimestamps
+epprd_rg:clvaryonvg(2.773):datavg[vgdatimestamps:201] typeset PS4_FUNC
+epprd_rg:clvaryonvg(2.773):datavg[vgdatimestamps:202] [[ high == high ]]
+epprd_rg:clvaryonvg(2.773):datavg[vgdatimestamps:202] set -x
+epprd_rg:clvaryonvg(2.773):datavg[vgdatimestamps:203] set -u
+epprd_rg:clvaryonvg(2.773):datavg[vgdatimestamps:206] : See what timestamp LVM has recorded from the last time it checked
+epprd_rg:clvaryonvg(2.773):datavg[vgdatimestamps:207] : the disks
+epprd_rg:clvaryonvg(2.773):datavg[vgdatimestamps:209] /usr/sbin/getlvodm -T 00c44af100004b00000001851e9dc053
+epprd_rg:clvaryonvg(2.774):datavg[vgdatimestamps:209] 2> /dev/null
+epprd_rg:clvaryonvg(2.777):datavg[vgdatimestamps:209] TS_FROM_ODM=63d4d87b2421bec0
+epprd_rg:clvaryonvg(2.777):datavg[vgdatimestamps:212] : Check to see if HACMP is maintaining a timestamp for this volume group
+epprd_rg:clvaryonvg(2.777):datavg[vgdatimestamps:213] : Needed for some older volume groups
+epprd_rg:clvaryonvg(2.777):datavg[vgdatimestamps:215] [[ -s /usr/es/sbin/cluster/etc/vg/datavg.tstamp ]]
+epprd_rg:clvaryonvg(2.777):datavg[vgdatimestamps:234] : Get the time stamp from the actual disk
+epprd_rg:clvaryonvg(2.777):datavg[vgdatimestamps:236] clvgdats /dev/datavg
+epprd_rg:clvaryonvg(2.777):datavg[vgdatimestamps:236] 2> /dev/null
+epprd_rg:clvaryonvg(2.787):datavg[vgdatimestamps:236] TS_FROM_DISK=63d4d87b2421bec0
+epprd_rg:clvaryonvg(2.787):datavg[vgdatimestamps:237] clvgdats_rc=0
+epprd_rg:clvaryonvg(2.787):datavg[vgdatimestamps:238] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.787):datavg[vgdatimestamps:247] [[ -z 63d4d87b2421bec0 ]]
+epprd_rg:clvaryonvg(2.787):datavg[1645] [[ -z 63d4d87b2421bec0 ]]
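
Note: vgdatimestamps compares the VGDA timestamp LVM last recorded in the ODM against the timestamp actually on disk; a mismatch means another node changed the volume group and the local ODM needs refreshing. Here both sides read 63d4d87b2421bec0, so nothing further is done. A sketch of the comparison (getlvodm is standard AIX; clvgdats is a PowerHA-internal helper):

    VGID=$(getlvodm -v datavg)                       # VG identifier from the ODM
    TS_FROM_ODM=$(getlvodm -T $VGID 2>/dev/null)     # what the ODM remembers
    TS_FROM_DISK=$(clvgdats /dev/datavg 2>/dev/null) # what the VGDA says now
    [[ $TS_FROM_ODM == "$TS_FROM_DISK" ]] ||
        print "ODM stale for datavg; resynchronization needed"
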
+epprd_rg:clvaryonvg(2.787):datavg[1656] : Finally, leave the volume in the requested state - on or off
+epprd_rg:clvaryonvg(2.787):datavg[1658] [[ FALSE == TRUE ]]
+epprd_rg:clvaryonvg(2.787):datavg[1665] (( 0 == 0 ))
+epprd_rg:clvaryonvg(2.787):datavg[1668] : Synchronize time stamps globally
+epprd_rg:clvaryonvg(2.787):datavg[1670] cl_update_vg_odm_ts -o datavg
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[77] version=1.13
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[121] o_flag=''
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[122] f_flag=''
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[123] getopts :of option
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[126] : Local timestamps should be good, since volume group was
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[127] : just varied on or off
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[128] o_flag=TRUE
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[123] getopts :of option
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[142] shift 1
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[144] vg_name=datavg
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[145] [[ -z datavg ]]
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[151] shift
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[152] node_list=''
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[153] /usr/es/sbin/cluster/utilities/cl_get_path all
+epprd_rg:cl_update_vg_odm_ts(0.004):datavg[153] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin
+epprd_rg:cl_update_vg_odm_ts(0.004):datavg[155] [[ -z '' ]]
+epprd_rg:cl_update_vg_odm_ts(0.004):datavg[158] : Check to see if this update is necessary - some LVM levels automatically
+epprd_rg:cl_update_vg_odm_ts(0.004):datavg[159] : update volume group timestamps clusterwide.
+epprd_rg:cl_update_vg_odm_ts(0.004):datavg[163] instfix -iqk IV74100
+epprd_rg:cl_update_vg_odm_ts(0.005):datavg[163] 1> /dev/null 2>& 1
+epprd_rg:cl_update_vg_odm_ts(0.012):datavg[164] instfix -iqk IV74883
+epprd_rg:cl_update_vg_odm_ts(0.012):datavg[164] 1> /dev/null 2>& 1
+epprd_rg:cl_update_vg_odm_ts(0.019):datavg[165] instfix -iqk IV74698
+epprd_rg:cl_update_vg_odm_ts(0.020):datavg[165] 1> /dev/null 2>& 1
+epprd_rg:cl_update_vg_odm_ts(0.026):datavg[166] instfix -iqk IV74246
+epprd_rg:cl_update_vg_odm_ts(0.027):datavg[166] 1> /dev/null 2>& 1
+epprd_rg:cl_update_vg_odm_ts(0.034):datavg[174] emgr -l -L IV74883
+epprd_rg:cl_update_vg_odm_ts(0.035):datavg[174] 2> /dev/null
+epprd_rg:cl_update_vg_odm_ts(0.302):datavg[174] emgr -l -L IV74698
+epprd_rg:cl_update_vg_odm_ts(0.303):datavg[174] 2> /dev/null
+epprd_rg:cl_update_vg_odm_ts(0.571):datavg[174] emgr -l -L IV74246
+epprd_rg:cl_update_vg_odm_ts(0.571):datavg[174] 2> /dev/null
+epprd_rg:cl_update_vg_odm_ts(0.840):datavg[183] : Each of the V, R, M and F fields is padded to fixed length,
+epprd_rg:cl_update_vg_odm_ts(0.840):datavg[184] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_update_vg_odm_ts(0.840):datavg[185] : 99.99.999.999
+epprd_rg:cl_update_vg_odm_ts(0.840):datavg[187] typeset -li V R M F
+epprd_rg:cl_update_vg_odm_ts(0.840):datavg[188] typeset -Z2 V
+epprd_rg:cl_update_vg_odm_ts(0.840):datavg[189] typeset -Z2 R
+epprd_rg:cl_update_vg_odm_ts(0.840):datavg[190] typeset -Z3 M
+epprd_rg:cl_update_vg_odm_ts(0.840):datavg[191] typeset -Z3 F
+epprd_rg:cl_update_vg_odm_ts(0.840):datavg[192] lvm_lvl6=601008015
+epprd_rg:cl_update_vg_odm_ts(0.840):datavg[192] typeset -li lvm_lvl6
+epprd_rg:cl_update_vg_odm_ts(0.840):datavg[194] lvm_lvl7=701003046
+epprd_rg:cl_update_vg_odm_ts(0.840):datavg[194] typeset -li lvm_lvl7
+epprd_rg:cl_update_vg_odm_ts(0.840):datavg[195] VRMF=0
+epprd_rg:cl_update_vg_odm_ts(0.840):datavg[195] typeset -li VRMF
+epprd_rg:cl_update_vg_odm_ts(0.840):datavg[198] : Here try and figure out what level of LVM is installed
+epprd_rg:cl_update_vg_odm_ts(0.841):datavg[200] lslpp -lcqOr bos.rte.lvm
+epprd_rg:cl_update_vg_odm_ts(0.844):datavg[200] cut -f3 -d:
+epprd_rg:cl_update_vg_odm_ts(0.845):datavg[200] read V R M F
+epprd_rg:cl_update_vg_odm_ts(0.846):datavg[200] IFS=.
+epprd_rg:cl_update_vg_odm_ts(0.846):datavg[201] VRMF=0702005101
+epprd_rg:cl_update_vg_odm_ts(0.846):datavg[203] (( 7 == 6 && 702005101 >= 601008015 ))
+epprd_rg:cl_update_vg_odm_ts(0.846):datavg[204] (( 702005101 >= 701003046 ))
+epprd_rg:cl_update_vg_odm_ts(0.846):datavg[207] : LVM at a level in which timestamp update is unnecessary
+epprd_rg:cl_update_vg_odm_ts(0.846):datavg[209] return 0
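
Note: the level check pads each of V, R, M and F to a fixed width so the dotted VRMF collapses into a single integer that compares correctly: 7.2.5.101 becomes 0702005101, which exceeds the 7.1.3.46 threshold (0701003046), so the clusterwide timestamp update is skipped. A standalone ksh sketch of the same comparison:

    typeset -li V R M F VRMF     # integers (ksh93 reads leading zeros as decimal)
    typeset -Z2 V R              # pad version/release to 2 digits
    typeset -Z3 M F              # pad modification/fix to 3 digits
    lslpp -lcqOr bos.rte.lvm | cut -f3 -d: | IFS=. read V R M F
    VRMF=$V$R$M$F                # e.g. 7.2.5.101 -> 0702005101
    (( VRMF >= 701003046 )) && print "LVM maintains VG timestamps clusterwide"
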
+epprd_rg:clvaryonvg(3.637):datavg[1674] : On successful varyon, clean up any files used to track errors with
+epprd_rg:clvaryonvg(3.637):datavg[1675] : this volume group
+epprd_rg:clvaryonvg(3.637):datavg[1677] rm -f /usr/es/sbin/cluster/etc/vg/datavg.desc /usr/es/sbin/cluster/etc/vg/datavg.replay /usr/es/sbin/cluster/etc/vg/datavg.perms /usr/es/sbin/cluster/etc/vg/datavg.tstamp /usr/es/sbin/cluster/etc/vg/datavg.fail
+epprd_rg:clvaryonvg(3.639):datavg[1680] : Note that a sync has not been done on the volume group at this point.
+epprd_rg:clvaryonvg(3.639):datavg[1681] : A sync is kicked off in cl_sync_vgs, once any filesystem mounts are
+epprd_rg:clvaryonvg(3.639):datavg[1682] : complete. A sync at this time would interfere with the mounts
+epprd_rg:clvaryonvg(3.639):datavg[1685] return 0
+epprd_rg:cl_activate_vgs(3.724):datavg[vgs_chk:103] ERRMSG=$'cl_set_vg_fence_height[126]: version @(#)10\t1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37\ncl_set_vg_fence_height[180]: open(/usr/es/sbin/cluster/etc/vg/datavg.uuid, O_RDONLY)\ncl_set_vg_fence_height[214]: read(datavg, 16)\ncl_set_vg_fence_height[237]: close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)\ncl_set_vg_fence_height[265]: sfwSetFenceGroup(vg=datavg uuid=ec2db4422261eae02091227fb9e53c88 height=rw(0))'
+epprd_rg:cl_activate_vgs(3.725):datavg[vgs_chk:104] RC=0
+epprd_rg:cl_activate_vgs(3.725):datavg[vgs_chk:107] (( 0 == 1 || 0 == 20 ))
+epprd_rg:cl_activate_vgs(3.725):datavg[vgs_chk:115] : exit status of clvaryonvg -n datavg: 0
+epprd_rg:cl_activate_vgs(3.725):datavg[vgs_chk:117] [[ -n $'cl_set_vg_fence_height[126]: version @(#)10\t1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37\ncl_set_vg_fence_height[180]: open(/usr/es/sbin/cluster/etc/vg/datavg.uuid, O_RDONLY)\ncl_set_vg_fence_height[214]: read(datavg, 16)\ncl_set_vg_fence_height[237]: close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)\ncl_set_vg_fence_height[265]: sfwSetFenceGroup(vg=datavg uuid=ec2db4422261eae02091227fb9e53c88 height=rw(0))' ]]
+epprd_rg:cl_activate_vgs(3.725):datavg[vgs_chk:117] (( 0 != 1 ))
+epprd_rg:cl_activate_vgs(3.725):datavg[vgs_chk:119] cl_echo 286 $'cl_activate_vgs: Successful clvaryonvg of datavg with message cl_set_vg_fence_height[126]: version @(#)10\t1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37\ncl_set_vg_fence_height[180]: open(/usr/es/sbin/cluster/etc/vg/datavg.uuid, O_RDONLY)\ncl_set_vg_fence_height[214]: read(datavg, 16)\ncl_set_vg_fence_height[237]: close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)\ncl_set_vg_fence_height[265]: sfwSetFenceGroup(vg=datavg uuid=ec2db4422261eae02091227fb9e53c88 height=rw(0)).' cl_activate_vgs datavg 'cl_set_vg_fence_height[126]:' version '@(#)10' 1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37 'cl_set_vg_fence_height[180]:' 'open(/usr/es/sbin/cluster/etc/vg/datavg.uuid,' 'O_RDONLY)' 'cl_set_vg_fence_height[214]:' 'read(datavg,' '16)' 'cl_set_vg_fence_height[237]:' 'close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)' 'cl_set_vg_fence_height[265]:' 'sfwSetFenceGroup(vg=datavg' uuid=ec2db4422261eae02091227fb9e53c88 height='rw(0))'
Jan 28 2023 17:10:39 cl_activate_vgs: Successful clvaryonvg of datavg with message cl_set_vg_fence_height[126]: version @(#)10 1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37
cl_set_vg_fence_height[180]: open(/usr/es/sbin/cluster/etc/vg/datavg.uuid, O_RDONLY)
cl_set_vg_fence_height[214]: read(datavg, 16)
cl_set_vg_fence_height[237]: close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)
cl_set_vg_fence_height[265]: sfwSetFenceGroup(vg=datavg uuid=ec2db4422261eae02091227fb9e53c88 height=rw(0)).
+epprd_rg:cl_activate_vgs(3.743):datavg[vgs_chk:123] [[ 0 != 0 ]]
+epprd_rg:cl_activate_vgs(3.743):datavg[vgs_chk:127] amlog_trace '' 'Activating Volume Group|datavg'
+epprd_rg:cl_activate_vgs(3.743):datavg[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_vgs(3.744):datavg[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_vgs(3.769):datavg[amlog_trace:319] cltime
+epprd_rg:cl_activate_vgs(3.771):datavg[amlog_trace:319] DATE=2023-01-28T17:10:39.088403
+epprd_rg:cl_activate_vgs(3.771):datavg[amlog_trace:320] echo '|2023-01-28T17:10:39.088403|INFO: Activating Volume Group|datavg'
+epprd_rg:cl_activate_vgs(3.771):datavg[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
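
Note: amlog_trace rotates clavailability.log with clcycle and then appends a timestamped marker; these markers are what availability reporting parses to time resource activation. A sketch of the append, assuming the PowerHA cltime helper seen in the trace:

    DATE=$(cltime)            # e.g. 2023-01-28T17:10:39.088403
    print "|$DATE|INFO: Activating Volume Group|datavg" \
        >> /var/hacmp/availability/clavailability.log
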
+epprd_rg:cl_activate_vgs(3.771):datavg[vgs_chk:132] echo datavg 0
+epprd_rg:cl_activate_vgs(3.771):datavg[vgs_chk:132] 1>> /tmp/_activate_vgs.tmp
+epprd_rg:cl_activate_vgs(3.771):datavg[vgs_chk:133] return 0
+epprd_rg:cl_activate_vgs:datavg[vgs_list:198] unset PS4_LOOP PS4_TIMER
+epprd_rg:cl_activate_vgs[304] wait
+epprd_rg:cl_activate_vgs[310] ALLNOERRVGS=All_nonerror_volume_groups
+epprd_rg:cl_activate_vgs[311] cl_RMupdate resource_up All_nonerror_volume_groups cl_activate_vgs
2023-01-28T17:10:39.112256
2023-01-28T17:10:39.116473
+epprd_rg:cl_activate_vgs[318] [[ -f /tmp/_activate_vgs.tmp ]]
+epprd_rg:cl_activate_vgs[320] grep ' 1' /tmp/_activate_vgs.tmp
+epprd_rg:cl_activate_vgs[329] rm -f /tmp/_activate_vgs.tmp
+epprd_rg:cl_activate_vgs[332] exit 0
+epprd_rg:process_resources[process_volume_groups:2584] RC=0
+epprd_rg:process_resources[process_volume_groups:2585] (( 0 != 0 && 0 != 11 ))
+epprd_rg:process_resources[process_volume_groups:2598] (( 0 != 0 ))
+epprd_rg:process_resources[process_volume_groups:2627] return 0
+epprd_rg:process_resources[process_volume_groups_main:2556] STAT=0
+epprd_rg:process_resources[process_volume_groups_main:2559] return 0
+epprd_rg:process_resources[3572] RC=0
+epprd_rg:process_resources[3573] [[ ACQUIRE == RELEASE ]]
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T17:10:39.134184 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=LOGREDO ACTION=ACQUIRE VOLUME_GROUPS='"datavg"' RESOURCE_GROUPS='"epprd_rg' '"'
+epprd_rg:process_resources[1] JOB_TYPE=LOGREDO
+epprd_rg:process_resources[1] ACTION=ACQUIRE
+epprd_rg:process_resources[1] VOLUME_GROUPS=datavg
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ LOGREDO == RELEASE ]]
+epprd_rg:process_resources[3360] [[ LOGREDO == ONLINE ]]
+epprd_rg:process_resources[3634] [[ ACQUIRE == ACQUIRE ]]
+epprd_rg:process_resources[3635] logredo_volume_groups
+epprd_rg:process_resources[logredo_volume_groups:2745] PS4_FUNC=logredo_volume_groups
+epprd_rg:process_resources[logredo_volume_groups:2745] typeset PS4_FUNC
+epprd_rg:process_resources(4.794)[logredo_volume_groups:2746] PS4_TIMER=true
+epprd_rg:process_resources(4.794)[logredo_volume_groups:2746] typeset PS4_TIMER
+epprd_rg:process_resources(4.794)[logredo_volume_groups:2747] [[ high == high ]]
+epprd_rg:process_resources(4.794)[logredo_volume_groups:2747] set -x
+epprd_rg:process_resources(4.794)[logredo_volume_groups:2749] TMP_FILE=/var/hacmp/log/.process_resources_logredo.23593416
+epprd_rg:process_resources(4.794)[logredo_volume_groups:2749] export TMP_FILE
+epprd_rg:process_resources(4.794)[logredo_volume_groups:2750] rm -f '/var/hacmp/log/.process_resources_logredo*'
+epprd_rg:process_resources(4.797)[logredo_volume_groups:2752] STAT=0
+epprd_rg:process_resources(4.797)[logredo_volume_groups:2755] export GROUPNAME
+epprd_rg:process_resources(4.798)[logredo_volume_groups:2757] get_list_head datavg
+epprd_rg:process_resources(4.798)[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources(4.798)[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources(4.798)[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources(4.798)[get_list_head:60] set -x
+epprd_rg:process_resources(4.799)[get_list_head:61] echo datavg
+epprd_rg:process_resources(4.801)[get_list_head:61] read listhead listtail
+epprd_rg:process_resources(4.801)[get_list_head:61] IFS=:
+epprd_rg:process_resources(4.802)[get_list_head:62] echo datavg
+epprd_rg:process_resources(4.803)[get_list_head:62] tr , ' '
+epprd_rg:process_resources(4.801)[logredo_volume_groups:2757] read LIST_OF_VOLUME_GROUPS_FOR_RG
+epprd_rg:process_resources(4.806)[logredo_volume_groups:2758] get_list_tail datavg
+epprd_rg:process_resources(4.806)[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources(4.807)[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources(4.807)[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources(4.807)[get_list_tail:68] set -x
+epprd_rg:process_resources(4.808)[get_list_tail:69] echo datavg
+epprd_rg:process_resources(4.811)[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources(4.811)[get_list_tail:69] IFS=:
+epprd_rg:process_resources(4.811)[get_list_tail:70] echo
+epprd_rg:process_resources(4.811)[logredo_volume_groups:2758] read VOLUME_GROUPS
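
Note: get_list_head and get_list_tail, whose bodies are visible in the trace, split the colon-separated per-resource-group lists that clRGPA emits: the head is the first group's chunk with its commas turned into spaces, the tail is everything after the first colon. Reconstructed as standalone ksh (shape inferred from the trace; the shipped source may differ):

    function get_list_head
    {
        typeset listhead listtail
        echo "$*" | IFS=: read listhead listtail
        echo "$listhead" | tr ',' ' '    # first RG's items, space-separated
    }

    function get_list_tail
    {
        typeset listhead listtail
        echo "$*" | IFS=: read listhead listtail
        echo "$listtail"                 # remaining RGs' chunks, still colon-joined
    }

With a single resource group, as here, the tail is empty: datavg yields head 'datavg' and tail ''.
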
+epprd_rg:process_resources(4.813)[logredo_volume_groups:2761] : Run logredo on all JFS/JFS2 log devices to assure FS consistency
+epprd_rg:process_resources(4.813)[logredo_volume_groups:2763] ALL_LVs=''
+epprd_rg:process_resources(4.813)[logredo_volume_groups:2764] lv_all=''
+epprd_rg:process_resources(4.813)[logredo_volume_groups:2765] mount_fs=''
+epprd_rg:process_resources(4.813)[logredo_volume_groups:2766] fsck_check=''
+epprd_rg:process_resources(4.813)[logredo_volume_groups:2767] MOUNTGUARD=''
+epprd_rg:process_resources(4.813)[logredo_volume_groups:2768] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.813)[logredo_volume_groups:2769] FMMOUNT=''
+epprd_rg:process_resources(4.815)[logredo_volume_groups:2772] tail +3
+epprd_rg:process_resources(4.814)[logredo_volume_groups:2772] lsvg -lL datavg
+epprd_rg:process_resources(4.814)[logredo_volume_groups:2772] LC_ALL=C
+epprd_rg:process_resources(4.815)[logredo_volume_groups:2772] 1>> /var/hacmp/log/.process_resources_logredo.23593416
+epprd_rg:process_resources(4.838)[logredo_volume_groups:2774] cat /var/hacmp/log/.process_resources_logredo.23593416
+epprd_rg:process_resources(4.841)[logredo_volume_groups:2774] awk '{print $1}'
+epprd_rg:process_resources(4.845)[logredo_volume_groups:2774] ALL_LVs=$'epprdaloglv\nsaplv\nsapmntlv\noraclelv\nepplv\noraarchlv\nsapdata1lv\nsapdata2lv\nsapdata3lv\nsapdata4lv\nboardlv\noriglogAlv\noriglogBlv\nmirrlogAlv\nmirrlogBlv'
+epprd_rg:process_resources(4.845)[logredo_volume_groups:2777] : Verify if any of the file systems associated with volume group datavg
+epprd_rg:process_resources(4.845)[logredo_volume_groups:2778] : is already mounted anywhere else in the cluster.
+epprd_rg:process_resources(4.845)[logredo_volume_groups:2779] : If it is already mounted somewhere else, we don't want to continue
+epprd_rg:process_resources(4.845)[logredo_volume_groups:2780] : here to avoid data corruption.
+epprd_rg:process_resources(4.847)[logredo_volume_groups:2782] cat /var/hacmp/log/.process_resources_logredo.23593416
+epprd_rg:process_resources(4.850)[logredo_volume_groups:2782] grep -v N/A
+epprd_rg:process_resources(4.852)[logredo_volume_groups:2782] awk '{print $1}'
+epprd_rg:process_resources(4.857)[logredo_volume_groups:2782] lv_all=$'saplv\nsapmntlv\noraclelv\nepplv\noraarchlv\nsapdata1lv\nsapdata2lv\nsapdata3lv\nsapdata4lv\nboardlv\noriglogAlv\noriglogBlv\nmirrlogAlv\nmirrlogBlv'
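
Note: the two pipelines above derive the working LV lists from a single cached lsvg run. tail +3 drops the 'datavg:' banner and the column header, the first awk keeps column one (every LV in the VG, log device included), and the grep -v N/A pass keeps only rows whose mount-point column is a real path, so lv_all is ALL_LVs minus epprdaloglv. Paraphrased (the table shape shown in the comment is illustrative):

    # cached table, roughly:
    #   LV NAME      TYPE     ... LV STATE      MOUNT POINT
    #   epprdaloglv  jfs2log  ... closed/syncd  N/A
    #   saplv        jfs2     ... closed/syncd  /usr/sap
    lsvg -lL datavg | tail +3 >> $TMP_FILE
    ALL_LVs=$(awk '{print $1}' $TMP_FILE)                 # every LV in the VG
    lv_all=$(grep -v N/A $TMP_FILE | awk '{print $1}')    # only LVs backing file systems
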
+epprd_rg:process_resources(4.857)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.857)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.859)[logredo_volume_groups:2789] lsfs -qc saplv
+epprd_rg:process_resources(4.859)[logredo_volume_groups:2789] LC_ALL=C
lsfs: No record matching '/var/hacmp/saplv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.862)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.865)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.867)[logredo_volume_groups:2789] grep -w MountGuard
+epprd_rg:process_resources(4.870)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.870)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.870)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.870)[logredo_volume_groups:2795] fsdb saplv
+epprd_rg:process_resources(4.871)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.876)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.878)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.880)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.882)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.887)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.887)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.887)[logredo_volume_groups:2804] [[ -n '' ]]
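
Note: the stanza above, repeated below for every LV in lv_all, is the concurrent-mount guard. lsfs -qc looks for the MountGuard attribute; because it is handed the bare LV name rather than a mount point, it fails with "No record matching '/var/hacmp/<lv>'" and harmlessly leaves MOUNTGUARD empty. fsdb is then driven by a two-line here-document (su dumps the superblock, q quits) and its output is searched for FM_MOUNT, which marks a file system mounted cleanly somewhere in the cluster; a 'yes' answer would make the script skip logredo rather than risk corrupting a live log. A sketch of the fsdb half of the probe, packaged as a function (the function name is invented for illustration):

    # true when FM_MOUNT says the file system behind $1 is mounted elsewhere
    function fs_mounted_elsewhere
    {
        typeset dev=$1 out fm
        out=$(fsdb "$dev" 2>/dev/null <<EOF
su
q
EOF
)
        fm=$(echo "$out" | grep -w FM_MOUNT | awk '{ print $1 }')
        [[ $fm == yes ]]
    }
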
+epprd_rg:process_resources(4.887)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.887)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.889)[logredo_volume_groups:2789] lsfs -qc sapmntlv
+epprd_rg:process_resources(4.889)[logredo_volume_groups:2789] LC_ALL=C
lsfs: No record matching '/var/hacmp/sapmntlv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.892)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.895)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.897)[logredo_volume_groups:2789] grep -w MountGuard
+epprd_rg:process_resources(4.900)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.901)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.901)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.901)[logredo_volume_groups:2795] fsdb sapmntlv
+epprd_rg:process_resources(4.902)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.905)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.907)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.909)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.911)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.916)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.916)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.916)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(4.916)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.916)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.918)[logredo_volume_groups:2789] lsfs -qc oraclelv
+epprd_rg:process_resources(4.918)[logredo_volume_groups:2789] LC_ALL=C
lsfs: No record matching '/var/hacmp/oraclelv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.921)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.923)[logredo_volume_groups:2789] grep -w MountGuard
+epprd_rg:process_resources(4.924)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.929)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.929)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.929)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.929)[logredo_volume_groups:2795] fsdb oraclelv
+epprd_rg:process_resources(4.930)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.933)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.935)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.937)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.939)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.944)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.944)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.944)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(4.944)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.944)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.946)[logredo_volume_groups:2789] lsfs -qc epplv
+epprd_rg:process_resources(4.946)[logredo_volume_groups:2789] LC_ALL=C
lsfs: No record matching '/var/hacmp/epplv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.949)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.951)[logredo_volume_groups:2789] grep -w MountGuard
+epprd_rg:process_resources(4.953)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.957)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.957)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.957)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.957)[logredo_volume_groups:2795] fsdb epplv
+epprd_rg:process_resources(4.958)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.962)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.964)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.966)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.968)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.973)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.973)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.973)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(4.973)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.973)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.975)[logredo_volume_groups:2789] lsfs -qc oraarchlv
+epprd_rg:process_resources(4.975)[logredo_volume_groups:2789] LC_ALL=C
lsfs: No record matching '/var/hacmp/oraarchlv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.978)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.980)[logredo_volume_groups:2789] grep -w MountGuard
+epprd_rg:process_resources(4.982)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.986)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.986)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.986)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.986)[logredo_volume_groups:2795] fsdb oraarchlv
+epprd_rg:process_resources(4.987)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.991)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.993)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.995)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.997)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(5.002)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(5.002)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(5.002)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(5.002)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(5.002)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(5.003)[logredo_volume_groups:2789] lsfs -qc sapdata1lv
+epprd_rg:process_resources(5.004)[logredo_volume_groups:2789] LC_ALL=C
lsfs: No record matching '/var/hacmp/sapdata1lv' was found in /etc/filesystems.
+epprd_rg:process_resources(5.007)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(5.009)[logredo_volume_groups:2789] grep -w MountGuard
+epprd_rg:process_resources(5.011)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(5.015)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(5.015)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(5.015)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(5.015)[logredo_volume_groups:2795] fsdb sapdata1lv
+epprd_rg:process_resources(5.016)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(5.019)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(5.021)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(5.023)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(5.024)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(5.030)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(5.030)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(5.030)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(5.030)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(5.030)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(5.032)[logredo_volume_groups:2789] lsfs -qc sapdata2lv
+epprd_rg:process_resources(5.032)[logredo_volume_groups:2789] LC_ALL=C
lsfs: No record matching '/var/hacmp/sapdata2lv' was found in /etc/filesystems.
+epprd_rg:process_resources(5.035)[logredo_volume_groups:2789] grep -w MountGuard
+epprd_rg:process_resources(5.037)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(5.039)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(5.043)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(5.043)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(5.043)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(5.043)[logredo_volume_groups:2795] fsdb sapdata2lv
+epprd_rg:process_resources(5.044)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(5.047)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(5.048)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(5.050)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(5.052)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(5.057)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(5.057)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(5.057)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(5.057)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(5.057)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(5.059)[logredo_volume_groups:2789] lsfs -qc sapdata3lv
+epprd_rg:process_resources(5.060)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(5.060)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(5.061)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/sapdata3lv' was found in /etc/filesystems.
+epprd_rg:process_resources(5.062)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(5.066)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(5.066)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(5.066)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(5.066)[logredo_volume_groups:2795] fsdb sapdata3lv
+epprd_rg:process_resources(5.067)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(5.070)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(5.072)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(5.073)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(5.073)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(5.078)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(5.078)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(5.078)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(5.078)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(5.078)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(5.080)[logredo_volume_groups:2789] lsfs -qc sapdata4lv
+epprd_rg:process_resources(5.081)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(5.081)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(5.082)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/sapdata4lv' was found in /etc/filesystems.
+epprd_rg:process_resources(5.083)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(5.087)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(5.087)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(5.087)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(5.087)[logredo_volume_groups:2795] fsdb sapdata4lv
+epprd_rg:process_resources(5.088)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(5.091)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(5.094)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(5.094)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(5.094)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(5.099)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(5.099)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(5.099)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(5.100)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(5.100)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(5.102)[logredo_volume_groups:2789] lsfs -qc boardlv
+epprd_rg:process_resources(5.102)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(5.102)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(5.103)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/boardlv' was found in /etc/filesystems.
+epprd_rg:process_resources(5.104)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(5.108)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(5.108)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(5.108)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(5.108)[logredo_volume_groups:2795] fsdb boardlv
+epprd_rg:process_resources(5.109)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(5.112)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(5.114)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(5.115)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(5.115)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(5.120)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(5.120)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(5.120)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(5.121)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(5.121)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(5.123)[logredo_volume_groups:2789] lsfs -qc origlogAlv
+epprd_rg:process_resources(5.123)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(5.123)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(5.124)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/origlogAlv' was found in /etc/filesystems.
+epprd_rg:process_resources(5.125)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(5.129)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(5.129)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(5.129)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(5.129)[logredo_volume_groups:2795] fsdb origlogAlv
+epprd_rg:process_resources(5.130)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(5.134)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(5.136)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(5.136)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(5.136)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(5.142)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(5.142)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(5.142)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(5.142)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(5.142)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(5.144)[logredo_volume_groups:2789] lsfs -qc origlogBlv
+epprd_rg:process_resources(5.144)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(5.145)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(5.145)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/origlogBlv' was found in /etc/filesystems.
+epprd_rg:process_resources(5.146)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(5.150)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(5.150)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(5.150)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(5.151)[logredo_volume_groups:2795] fsdb origlogBlv
+epprd_rg:process_resources(5.152)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(5.155)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(5.157)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(5.157)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(5.158)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(5.163)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(5.163)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(5.163)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(5.163)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(5.163)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(5.165)[logredo_volume_groups:2789] lsfs -qc mirrlogAlv
+epprd_rg:process_resources(5.165)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(5.166)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(5.166)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/mirrlogAlv' was found in /etc/filesystems.
+epprd_rg:process_resources(5.167)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(5.172)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(5.172)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(5.172)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(5.172)[logredo_volume_groups:2795] fsdb mirrlogAlv
+epprd_rg:process_resources(5.173)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(5.176)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(5.178)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(5.179)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(5.179)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(5.184)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(5.184)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(5.184)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(5.184)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(5.184)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(5.186)[logredo_volume_groups:2789] lsfs -qc mirrlogBlv
+epprd_rg:process_resources(5.186)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(5.187)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(5.187)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/mirrlogBlv' was found in /etc/filesystems.
+epprd_rg:process_resources(5.189)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(5.193)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(5.193)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(5.193)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(5.193)[logredo_volume_groups:2795] fsdb mirrlogBlv
+epprd_rg:process_resources(5.194)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(5.197)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(5.199)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(5.200)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(5.200)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(5.205)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(5.205)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(5.205)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(5.205)[logredo_volume_groups:2814] comm_failure=''
+epprd_rg:process_resources(5.205)[logredo_volume_groups:2815] rc_mount=''
+epprd_rg:process_resources(5.205)[logredo_volume_groups:2816] [[ -n '' ]]
+epprd_rg:process_resources(5.205)[logredo_volume_groups:2851] logdevs=''
+epprd_rg:process_resources(5.205)[logredo_volume_groups:2852] HAVE_GEO=''
+epprd_rg:process_resources(5.205)[logredo_volume_groups:2853] lslpp -l 'hageo.*'
+epprd_rg:process_resources(5.206)[logredo_volume_groups:2853] 1> /dev/null 2>& 1
+epprd_rg:process_resources(5.209)[logredo_volume_groups:2854] lslpp -l 'geoRM.*'
+epprd_rg:process_resources(5.210)[logredo_volume_groups:2854] 1> /dev/null 2>& 1
+epprd_rg:process_resources(5.213)[logredo_volume_groups:2874] pattern='jfs*log'
+epprd_rg:process_resources(5.213)[logredo_volume_groups:2876] : Any device with the type as log should be added
+epprd_rg:process_resources(5.213)[logredo_volume_groups:2882] odmget -q $'name = epprdaloglv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.217)[logredo_volume_groups:2882] [[ -n $'\nCuAt:\n\tname = "epprdaloglv"\n\tattribute = "type"\n\tvalue = "jfs2log"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.217)[logredo_volume_groups:2884] logdevs=' /dev/epprdaloglv'
+epprd_rg:process_resources(5.217)[logredo_volume_groups:2882] odmget -q $'name = saplv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.220)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.220)[logredo_volume_groups:2882] odmget -q $'name = sapmntlv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.224)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.224)[logredo_volume_groups:2882] odmget -q $'name = oraclelv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.227)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.227)[logredo_volume_groups:2882] odmget -q $'name = epplv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.231)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.231)[logredo_volume_groups:2882] odmget -q $'name = oraarchlv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.235)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.235)[logredo_volume_groups:2882] odmget -q $'name = sapdata1lv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.238)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.238)[logredo_volume_groups:2882] odmget -q $'name = sapdata2lv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.242)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.242)[logredo_volume_groups:2882] odmget -q $'name = sapdata3lv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.245)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.245)[logredo_volume_groups:2882] odmget -q $'name = sapdata4lv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.249)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.249)[logredo_volume_groups:2882] odmget -q $'name = boardlv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.252)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.252)[logredo_volume_groups:2882] odmget -q $'name = origlogAlv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.256)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.256)[logredo_volume_groups:2882] odmget -q $'name = origlogBlv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.259)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.260)[logredo_volume_groups:2882] odmget -q $'name = mirrlogAlv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.263)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.263)[logredo_volume_groups:2882] odmget -q $'name = mirrlogBlv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.267)[logredo_volume_groups:2882] [[ -n '' ]]
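
Note: the loop above asks the ODM which of the LVs are dedicated log devices. For each name it queries the customized-attributes class CuAt for an attribute 'type' whose value matches jfs*log; only epprdaloglv (a jfs2log LV) matches, so logdevs begins life as /dev/epprdaloglv. Condensed (assuming ALL_LVs holds one LV name per line, as in the trace):

    pattern='jfs*log'
    for lv in $ALL_LVs
    do
        if [[ -n $(odmget -q "name = $lv and attribute = type and value like $pattern" CuAt) ]]
        then
            logdevs="$logdevs /dev/$lv"    # dedicated JFS/JFS2 log device
        fi
    done
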
+epprd_rg:process_resources(5.267)[logredo_volume_groups:2889] : JFS2 file systems can have inline logs where the log LV is the same as the FS LV.
+epprd_rg:process_resources(5.267)[logredo_volume_groups:2895] odmget $'-qname = epprdaloglv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.270)[logredo_volume_groups:2895] [[ -n '' ]]
+epprd_rg:process_resources(5.270)[logredo_volume_groups:2895] odmget $'-qname = saplv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.274)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "saplv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.276)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.276)[logredo_volume_groups:2898] odmget -q 'name = saplv and attribute = label' CuAt
+epprd_rg:process_resources(5.280)[logredo_volume_groups:2898] [[ -n /usr/sap ]]
+epprd_rg:process_resources(5.282)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.282)[logredo_volume_groups:2900] grep -wp /dev/saplv /etc/filesystems
+epprd_rg:process_resources(5.287)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.287)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.287)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/saplv ]]
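
Note: this second pass, repeated below for each jfs2 LV, handles JFS2 inline logs, where the log lives inside the file system's own LV. The mount point is pulled from the LV's CuAt label record, the matching stanza is extracted from /etc/filesystems with AIX grep's paragraph mode (-p), and awk returns the stanza's log attribute; a value of INLINE, or of the LV's own device, would add that LV to logdevs. Here every stanza names /dev/epprdaloglv, so nothing is added. Shape of the data being parsed (stanza values are illustrative):

    # /usr/sap:
    #         dev = /dev/saplv
    #         vfs = jfs2
    #         log = /dev/epprdaloglv
    LOG=$(grep -wp /dev/saplv /etc/filesystems | awk '$1 ~ /log/ {printf $3}')
    if [[ $LOG == INLINE || $LOG == /dev/saplv ]]
    then
        logdevs="$logdevs /dev/saplv"      # inline log: replay the FS LV itself
    fi
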
+epprd_rg:process_resources(5.287)[logredo_volume_groups:2895] odmget $'-qname = sapmntlv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.291)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "sapmntlv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.293)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.293)[logredo_volume_groups:2898] odmget -q 'name = sapmntlv and attribute = label' CuAt
+epprd_rg:process_resources(5.297)[logredo_volume_groups:2898] [[ -n /sapmnt ]]
+epprd_rg:process_resources(5.299)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.299)[logredo_volume_groups:2900] grep -wp /dev/sapmntlv /etc/filesystems
+epprd_rg:process_resources(5.305)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.305)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.305)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/sapmntlv ]]
+epprd_rg:process_resources(5.305)[logredo_volume_groups:2895] odmget $'-qname = oraclelv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.308)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "oraclelv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.310)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.310)[logredo_volume_groups:2898] odmget -q 'name = oraclelv and attribute = label' CuAt
+epprd_rg:process_resources(5.315)[logredo_volume_groups:2898] [[ -n /oracle ]]
+epprd_rg:process_resources(5.317)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.317)[logredo_volume_groups:2900] grep -wp /dev/oraclelv /etc/filesystems
+epprd_rg:process_resources(5.322)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.322)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.322)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/oraclelv ]]
+epprd_rg:process_resources(5.322)[logredo_volume_groups:2895] odmget $'-qname = epplv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.326)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "epplv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.328)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.328)[logredo_volume_groups:2898] odmget -q 'name = epplv and attribute = label' CuAt
+epprd_rg:process_resources(5.332)[logredo_volume_groups:2898] [[ -n /oracle/EPP ]]
+epprd_rg:process_resources(5.334)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.334)[logredo_volume_groups:2900] grep -wp /dev/epplv /etc/filesystems
+epprd_rg:process_resources(5.339)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.340)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.340)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/epplv ]]
+epprd_rg:process_resources(5.340)[logredo_volume_groups:2895] odmget $'-qname = oraarchlv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.343)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "oraarchlv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.345)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.345)[logredo_volume_groups:2898] odmget -q 'name = oraarchlv and attribute = label' CuAt
+epprd_rg:process_resources(5.350)[logredo_volume_groups:2898] [[ -n /oracle/EPP/oraarch ]]
+epprd_rg:process_resources(5.352)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.352)[logredo_volume_groups:2900] grep -wp /dev/oraarchlv /etc/filesystems
+epprd_rg:process_resources(5.357)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.357)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.357)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/oraarchlv ]]
+epprd_rg:process_resources(5.357)[logredo_volume_groups:2895] odmget $'-qname = sapdata1lv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.361)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "sapdata1lv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.363)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.363)[logredo_volume_groups:2898] odmget -q 'name = sapdata1lv and attribute = label' CuAt
+epprd_rg:process_resources(5.367)[logredo_volume_groups:2898] [[ -n /oracle/EPP/sapdata1 ]]
+epprd_rg:process_resources(5.369)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.369)[logredo_volume_groups:2900] grep -wp /dev/sapdata1lv /etc/filesystems
+epprd_rg:process_resources(5.374)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.374)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.374)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/sapdata1lv ]]
+epprd_rg:process_resources(5.374)[logredo_volume_groups:2895] odmget $'-qname = sapdata2lv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.378)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "sapdata2lv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.380)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.380)[logredo_volume_groups:2898] odmget -q 'name = sapdata2lv and attribute = label' CuAt
+epprd_rg:process_resources(5.384)[logredo_volume_groups:2898] [[ -n /oracle/EPP/sapdata2 ]]
+epprd_rg:process_resources(5.387)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.387)[logredo_volume_groups:2900] grep -wp /dev/sapdata2lv /etc/filesystems
+epprd_rg:process_resources(5.392)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.392)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.392)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/sapdata2lv ]]
+epprd_rg:process_resources(5.392)[logredo_volume_groups:2895] odmget $'-qname = sapdata3lv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.396)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "sapdata3lv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.397)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.398)[logredo_volume_groups:2898] odmget -q 'name = sapdata3lv and attribute = label' CuAt
+epprd_rg:process_resources(5.402)[logredo_volume_groups:2898] [[ -n /oracle/EPP/sapdata3 ]]
+epprd_rg:process_resources(5.404)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.404)[logredo_volume_groups:2900] grep -wp /dev/sapdata3lv /etc/filesystems
+epprd_rg:process_resources(5.409)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.409)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.409)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/sapdata3lv ]]
+epprd_rg:process_resources(5.409)[logredo_volume_groups:2895] odmget $'-qname = sapdata4lv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.413)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "sapdata4lv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.415)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.415)[logredo_volume_groups:2898] odmget -q 'name = sapdata4lv and attribute = label' CuAt
+epprd_rg:process_resources(5.419)[logredo_volume_groups:2898] [[ -n /oracle/EPP/sapdata4 ]]
+epprd_rg:process_resources(5.421)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.421)[logredo_volume_groups:2900] grep -wp /dev/sapdata4lv /etc/filesystems
+epprd_rg:process_resources(5.427)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.427)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.427)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/sapdata4lv ]]
+epprd_rg:process_resources(5.427)[logredo_volume_groups:2895] odmget $'-qname = boardlv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.430)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "boardlv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.432)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.432)[logredo_volume_groups:2898] odmget -q 'name = boardlv and attribute = label' CuAt
+epprd_rg:process_resources(5.437)[logredo_volume_groups:2898] [[ -n /board_org ]]
+epprd_rg:process_resources(5.439)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.439)[logredo_volume_groups:2900] grep -wp /dev/boardlv /etc/filesystems
+epprd_rg:process_resources(5.444)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.444)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.444)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/boardlv ]]
+epprd_rg:process_resources(5.444)[logredo_volume_groups:2895] odmget $'-qname = origlogAlv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.448)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "origlogAlv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.450)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.450)[logredo_volume_groups:2898] odmget -q 'name = origlogAlv and attribute = label' CuAt
+epprd_rg:process_resources(5.454)[logredo_volume_groups:2898] [[ -n /oracle/EPP/origlogA ]]
+epprd_rg:process_resources(5.456)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.457)[logredo_volume_groups:2900] grep -wp /dev/origlogAlv /etc/filesystems
+epprd_rg:process_resources(5.462)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.462)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.462)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/origlogAlv ]]
+epprd_rg:process_resources(5.462)[logredo_volume_groups:2895] odmget $'-qname = origlogBlv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.466)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "origlogBlv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.468)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.468)[logredo_volume_groups:2898] odmget -q 'name = origlogBlv and attribute = label' CuAt
+epprd_rg:process_resources(5.472)[logredo_volume_groups:2898] [[ -n /oracle/EPP/origlogB ]]
+epprd_rg:process_resources(5.474)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.474)[logredo_volume_groups:2900] grep -wp /dev/origlogBlv /etc/filesystems
+epprd_rg:process_resources(5.479)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.479)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.479)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/origlogBlv ]]
+epprd_rg:process_resources(5.479)[logredo_volume_groups:2895] odmget $'-qname = mirrlogAlv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.483)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "mirrlogAlv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.485)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.485)[logredo_volume_groups:2898] odmget -q 'name = mirrlogAlv and attribute = label' CuAt
+epprd_rg:process_resources(5.490)[logredo_volume_groups:2898] [[ -n /oracle/EPP/mirrlogA ]]
+epprd_rg:process_resources(5.492)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.492)[logredo_volume_groups:2900] grep -wp /dev/mirrlogAlv /etc/filesystems
+epprd_rg:process_resources(5.497)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.497)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.497)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/mirrlogAlv ]]
+epprd_rg:process_resources(5.497)[logredo_volume_groups:2895] odmget $'-qname = mirrlogBlv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.501)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "mirrlogBlv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.503)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.503)[logredo_volume_groups:2898] odmget -q 'name = mirrlogBlv and attribute = label' CuAt
+epprd_rg:process_resources(5.507)[logredo_volume_groups:2898] [[ -n /oracle/EPP/mirrlogB ]]
+epprd_rg:process_resources(5.509)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.509)[logredo_volume_groups:2900] grep -wp /dev/mirrlogBlv /etc/filesystems
+epprd_rg:process_resources(5.515)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.515)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.515)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/mirrlogBlv ]]
+epprd_rg:process_resources(5.515)[logredo_volume_groups:2910] : Remove any duplicates acquired so far
+epprd_rg:process_resources(5.517)[logredo_volume_groups:2912] echo /dev/epprdaloglv
+epprd_rg:process_resources(5.517)[logredo_volume_groups:2912] sort -u
+epprd_rg:process_resources(5.518)[logredo_volume_groups:2912] tr ' ' '\n'
+epprd_rg:process_resources(5.524)[logredo_volume_groups:2912] logdevs=/dev/epprdaloglv
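
Note: before replay, the accumulated list is de-duplicated by flattening it to one device per line and sorting unique; with a single shared log the result is just /dev/epprdaloglv:

    logdevs=$(echo $logdevs | tr ' ' '\n' | sort -u)   # unique log devices
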
+epprd_rg:process_resources(5.524)[logredo_volume_groups:2915] : Run logredos in parallel to save time.
+epprd_rg:process_resources(5.524)[logredo_volume_groups:2919] [[ -n '' ]]
+epprd_rg:process_resources(5.524)[logredo_volume_groups:2944] : Run logredo only if the LV is closed.
+epprd_rg:process_resources(5.524)[logredo_volume_groups:2946] awk '$1 ~ /^epprdaloglv$/ && $6 ~ /closed\// {print "CLOSED"}' /var/hacmp/log/.process_resources_logredo.23593416
+epprd_rg:process_resources(5.528)[logredo_volume_groups:2946] [[ -n CLOSED ]]
+epprd_rg:process_resources(5.528)[logredo_volume_groups:2949] : Run logredo only if the filesystem is not mounted on any node in the cluster.
+epprd_rg:process_resources(5.528)[logredo_volume_groups:2951] [[ -z '' ]]
+epprd_rg:process_resources(5.529)[logredo_volume_groups:2958] rm -f /var/hacmp/log/.process_resources_logredo.23593416
+epprd_rg:process_resources(5.529)[logredo_volume_groups:2953] logredo /dev/epprdaloglv
+epprd_rg:process_resources(5.533)[logredo_volume_groups:2962] : Wait for the background logredos from the RGs
+epprd_rg:process_resources(5.533)[logredo_volume_groups:2964] wait
J2_LOGREDO:log redo processing for /dev/epprdaloglv
+epprd_rg:process_resources(5.565)[logredo_volume_groups:2966] return 0
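
Note: the final gate and the replay itself. logredo runs only when the cached lsvg table shows the log LV closed (column six matches closed/) and no FM_MOUNT evidence was found, and each logredo is launched as a background job so multiple resource groups can replay in parallel, with a single wait collecting them. That backgrounding is why the trace above shows the rm at script line 2958 before the logredo at 2953. Condensed (assuming $TMP_FILE still holds the cached lsvg output):

    for dev in $logdevs
    do
        lv=${dev#/dev/}
        closed=$(awk -v lv="$lv" '$1 == lv && $6 ~ /closed\// {print "CLOSED"}' $TMP_FILE)
        [[ -n $closed ]] || continue     # never replay an open (mounted) log
        logredo $dev &                   # replay in the background
    done
    wait                                 # reap every background logredo
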
+epprd_rg:process_resources(5.565)[3324] true
+epprd_rg:process_resources(5.565)[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources(5.565)[3328] set -a
+epprd_rg:process_resources(5.565)[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T17:10:39.924466 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources(5.584)[3329] eval JOB_TYPE=FILESYSTEMS ACTION=ACQUIRE FILE_SYSTEMS='"/board_org,/oracle,/oracle/EPP,/oracle/EPP/mirrlogA,/oracle/EPP/mirrlogB,/oracle/EPP/oraarch,/oracle/EPP/origlogA,/oracle/EPP/origlogB,/oracle/EPP/sapdata1,/oracle/EPP/sapdata2,/oracle/EPP/sapdata3,/oracle/EPP/sapdata4,/sapmnt,/usr/sap"' RESOURCE_GROUPS='"epprd_rg' '"' FSCHECK_TOOLS='"fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck"' RECOVERY_METHODS='"sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential"'
+epprd_rg:process_resources(5.584)[1] JOB_TYPE=FILESYSTEMS
+epprd_rg:process_resources(5.584)[1] ACTION=ACQUIRE
+epprd_rg:process_resources(5.584)[1] FILE_SYSTEMS=/board_org,/oracle,/oracle/EPP,/oracle/EPP/mirrlogA,/oracle/EPP/mirrlogB,/oracle/EPP/oraarch,/oracle/EPP/origlogA,/oracle/EPP/origlogB,/oracle/EPP/sapdata1,/oracle/EPP/sapdata2,/oracle/EPP/sapdata3,/oracle/EPP/sapdata4,/sapmnt,/usr/sap
+epprd_rg:process_resources(5.584)[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources(5.584)[1] FSCHECK_TOOLS=fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck
+epprd_rg:process_resources(5.584)[1] RECOVERY_METHODS=sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential
+epprd_rg:process_resources(5.584)[3330] RC=0
+epprd_rg:process_resources(5.584)[3331] set +a
+epprd_rg:process_resources(5.584)[3333] (( 0 != 0 ))
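
Note: clRGPA's next job switches to FILESYSTEMS. FILE_SYSTEMS, FSCHECK_TOOLS and RECOVERY_METHODS are parallel comma-separated lists, aligned by position, so each of the fourteen mount points is paired with fsck as its check tool and sequential as its recovery method. A quick way to view that pairing (illustrative only; the temp-file names are invented):

    echo "$FILE_SYSTEMS"     | tr ',' '\n' > /tmp/fs.$$
    echo "$FSCHECK_TOOLS"    | tr ',' '\n' > /tmp/tool.$$
    echo "$RECOVERY_METHODS" | tr ',' '\n' > /tmp/meth.$$
    paste /tmp/fs.$$ /tmp/tool.$$ /tmp/meth.$$ |
    while read fs tool method
    do
        print "$fs: check=$tool recovery=$method"
    done
    rm -f /tmp/fs.$$ /tmp/tool.$$ /tmp/meth.$$
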
+epprd_rg:process_resources(5.584)[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources(5.584)[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources(5.584)[3343] export GROUPNAME
+epprd_rg:process_resources(5.584)[3353] IS_SERVICE_START=1
+epprd_rg:process_resources(5.584)[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources(5.584)[3360] [[ FILESYSTEMS == RELEASE ]]
+epprd_rg:process_resources(5.584)[3360] [[ FILESYSTEMS == ONLINE ]]
+epprd_rg:process_resources(5.584)[3482] process_file_systems ACQUIRE
+epprd_rg:process_resources(5.584)[process_file_systems:2640] PS4_FUNC=process_file_systems
+epprd_rg:process_resources(5.584)[process_file_systems:2640] typeset PS4_FUNC
+epprd_rg:process_resources(5.584)[process_file_systems:2641] [[ high == high ]]
+epprd_rg:process_resources(5.584)[process_file_systems:2641] set -x
+epprd_rg:process_resources(5.584)[process_file_systems:2643] STAT=0
+epprd_rg:process_resources(5.584)[process_file_systems:2645] [[ ACQUIRE == ACQUIRE ]]
+epprd_rg:process_resources(5.584)[process_file_systems:2647] cl_activate_fs
+epprd_rg:cl_activate_fs[819] version=1.1.8.5
+epprd_rg:cl_activate_fs[823] : Check for mounting OEM file systems
+epprd_rg:cl_activate_fs[825] OEM_FS=false
+epprd_rg:cl_activate_fs[826] (( 0 != 0 ))
+epprd_rg:cl_activate_fs[832] STATUS=0
+epprd_rg:cl_activate_fs[832] typeset -li STATUS
+epprd_rg:cl_activate_fs[833] EMULATE=REAL
+epprd_rg:cl_activate_fs[836] : The environment variable MOUNT_WLMCNTRL_SELFMANAGE is referred inside mount.
+epprd_rg:cl_activate_fs[837] : If this variable is set, few calls to wlmcntrl are skipped inside mount, which
+epprd_rg:cl_activate_fs[838] : offers performance benefits. Hence we will export this variable if it is set
+epprd_rg:cl_activate_fs[839] : in /etc/environment.
+epprd_rg:cl_activate_fs[841] grep -w ^MOUNT_WLMCNTRL_SELFMANAGE /etc/environment
+epprd_rg:cl_activate_fs[841] export eval
+epprd_rg:cl_activate_fs[843] [[ -n FILESYSTEMS ]]
+epprd_rg:cl_activate_fs[843] [[ FILESYSTEMS != GROUP ]]
+epprd_rg:cl_activate_fs[846] : If JOB_TYPE is set, and it does not equal GROUP, then
+epprd_rg:cl_activate_fs[847] : we are processing on behalf of process_resources, which passes requests
+epprd_rg:cl_activate_fs[848] : associated with multiple resource groups through environment variables
+epprd_rg:cl_activate_fs[850] activate_fs_process_resources
+epprd_rg:cl_activate_fs[activate_fs_process_resources:716] [[ high == high ]]
+epprd_rg:cl_activate_fs[activate_fs_process_resources:716] set -x
+epprd_rg:cl_activate_fs[activate_fs_process_resources:718] ERRSTATUS=0
+epprd_rg:cl_activate_fs[activate_fs_process_resources:718] typeset -i ERRSTATUS
+epprd_rg:cl_activate_fs[activate_fs_process_resources:719] RC=0
+epprd_rg:cl_activate_fs[activate_fs_process_resources:719] typeset -li RC
+epprd_rg:cl_activate_fs[activate_fs_process_resources:742] export GROUPNAME
+epprd_rg:cl_activate_fs[activate_fs_process_resources:745] : Get the file systems, recovery tool and procedure for this
+epprd_rg:cl_activate_fs[activate_fs_process_resources:746] : resource group
+epprd_rg:cl_activate_fs[activate_fs_process_resources:748] print /board_org,/oracle,/oracle/EPP,/oracle/EPP/mirrlogA,/oracle/EPP/mirrlogB,/oracle/EPP/oraarch,/oracle/EPP/origlogA,/oracle/EPP/origlogB,/oracle/EPP/sapdata1,/oracle/EPP/sapdata2,/oracle/EPP/sapdata3,/oracle/EPP/sapdata4,/sapmnt,/usr/sap
+epprd_rg:cl_activate_fs[activate_fs_process_resources:748] read _RG_FILE_SYSTEMS FILE_SYSTEMS
+epprd_rg:cl_activate_fs[activate_fs_process_resources:748] IFS=:
+epprd_rg:cl_activate_fs[activate_fs_process_resources:749] print fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck
+epprd_rg:cl_activate_fs[activate_fs_process_resources:749] read _RG_FSCHECK_TOOLS FSCHECK_TOOLS
+epprd_rg:cl_activate_fs[activate_fs_process_resources:749] IFS=:
+epprd_rg:cl_activate_fs[activate_fs_process_resources:750] print sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential
+epprd_rg:cl_activate_fs[activate_fs_process_resources:750] read _RG_RECOVERY_METHODS RECOVERY_METHODS
+epprd_rg:cl_activate_fs[activate_fs_process_resources:750] IFS=:
+epprd_rg:cl_activate_fs[activate_fs_process_resources:753] : Since all file systems in a resource group use the same recovery
+epprd_rg:cl_activate_fs[activate_fs_process_resources:754] : method and recovery tool, just pick the first one in each list
+epprd_rg:cl_activate_fs[activate_fs_process_resources:756] print fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck
+epprd_rg:cl_activate_fs[activate_fs_process_resources:756] read FSCHECK_TOOL rest
+epprd_rg:cl_activate_fs[activate_fs_process_resources:756] IFS=,
+epprd_rg:cl_activate_fs[activate_fs_process_resources:757] print sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential
+epprd_rg:cl_activate_fs[activate_fs_process_resources:757] read RECOVERY_METHOD rest
+epprd_rg:cl_activate_fs[activate_fs_process_resources:757] IFS=,
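[Editor's note on the idiom traced here: in ksh93 the last component of a pipeline runs in the current shell, so print list | read VAR rest really does set VAR for the caller (the same construct would silently lose the assignment in bash). A minimal sketch with illustrative values:]

#!/bin/ksh93
FSCHECK_TOOLS="fsck,fsck,fsck"
RECOVERY_METHODS="sequential,sequential,sequential"
# The one-shot IFS=, applies only to the read it prefixes.
print "$FSCHECK_TOOLS"    | IFS=, read FSCHECK_TOOL rest      # FSCHECK_TOOL=fsck
print "$RECOVERY_METHODS" | IFS=, read RECOVERY_METHOD rest   # RECOVERY_METHOD=sequential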
+epprd_rg:cl_activate_fs[activate_fs_process_resources:760] : If there are any unmounted file systems for this resource group, go
+epprd_rg:cl_activate_fs[activate_fs_process_resources:761] : recover and mount them.
+epprd_rg:cl_activate_fs[activate_fs_process_resources:763] [[ -n /board_org,/oracle,/oracle/EPP,/oracle/EPP/mirrlogA,/oracle/EPP/mirrlogB,/oracle/EPP/oraarch,/oracle/EPP/origlogA,/oracle/EPP/origlogB,/oracle/EPP/sapdata1,/oracle/EPP/sapdata2,/oracle/EPP/sapdata3,/oracle/EPP/sapdata4,/sapmnt,/usr/sap ]]
+epprd_rg:cl_activate_fs[activate_fs_process_resources:765] IFS=,
+epprd_rg:cl_activate_fs[activate_fs_process_resources:765] set -- /board_org,/oracle,/oracle/EPP,/oracle/EPP/mirrlogA,/oracle/EPP/mirrlogB,/oracle/EPP/oraarch,/oracle/EPP/origlogA,/oracle/EPP/origlogB,/oracle/EPP/sapdata1,/oracle/EPP/sapdata2,/oracle/EPP/sapdata3,/oracle/EPP/sapdata4,/sapmnt,/usr/sap
+epprd_rg:cl_activate_fs[activate_fs_process_resources:765] print /board_org /oracle /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /sapmnt /usr/sap
+epprd_rg:cl_activate_fs[activate_fs_process_resources:765] RG_FILE_SYSTEMS='/board_org /oracle /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /sapmnt /usr/sap'
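[Editor's note: the comma-separated list is converted to a space-separated one before the call to activate_fs_process_group. A sketch of the conversion under illustrative input; doing it in a subshell keeps the one-shot IFS from leaking into the caller:]

#!/bin/ksh93
FILE_SYSTEMS="/board_org,/oracle,/sapmnt"
# set -- splits the value on commas; the unquoted $* lets print rejoin
# the resulting positional parameters with spaces.
RG_FILE_SYSTEMS=$(IFS=, ; set -- $FILE_SYSTEMS ; print $*)
print "$RG_FILE_SYSTEMS"    # -> /board_org /oracle /sapmnt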
+epprd_rg:cl_activate_fs[activate_fs_process_resources:766] activate_fs_process_group sequential fsck '/board_org /oracle /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /sapmnt /usr/sap'
+epprd_rg:cl_activate_fs[activate_fs_process_group:362] PS4_LOOP=''
+epprd_rg:cl_activate_fs[activate_fs_process_group:362] typeset PS4_LOOP
+epprd_rg:cl_activate_fs[activate_fs_process_group:363] [[ high == high ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:363] set -x
+epprd_rg:cl_activate_fs[activate_fs_process_group:365] typeset RECOVERY_METHOD FSCHECK_TOOL FILESYSTEMS
+epprd_rg:cl_activate_fs[activate_fs_process_group:366] STATUS=0
+epprd_rg:cl_activate_fs[activate_fs_process_group:366] typeset -i STATUS
+epprd_rg:cl_activate_fs[activate_fs_process_group:368] RECOVERY_METHOD=sequential
+epprd_rg:cl_activate_fs[activate_fs_process_group:369] FSCHECK_TOOL=fsck
+epprd_rg:cl_activate_fs[activate_fs_process_group:370] shift 2
+epprd_rg:cl_activate_fs[activate_fs_process_group:371] FILESYSTEMS='/board_org /oracle /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /sapmnt /usr/sap'
+epprd_rg:cl_activate_fs[activate_fs_process_group:372] comm_failure=''
+epprd_rg:cl_activate_fs[activate_fs_process_group:372] typeset comm_failure
+epprd_rg:cl_activate_fs[activate_fs_process_group:373] rc_mount=''
+epprd_rg:cl_activate_fs[activate_fs_process_group:373] typeset rc_mount
+epprd_rg:cl_activate_fs[activate_fs_process_group:376] : Filter out duplicates and file systems that are already mounted
+epprd_rg:cl_activate_fs[activate_fs_process_group:378] mounts_to_do '/board_org /oracle /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /sapmnt /usr/sap'
+epprd_rg:cl_activate_fs[mounts_to_do:283] tomount='/board_org /oracle /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /sapmnt /usr/sap'
+epprd_rg:cl_activate_fs[mounts_to_do:283] typeset tomount
+epprd_rg:cl_activate_fs[mounts_to_do:286] : Get most current list of mounted filesystems
+epprd_rg:cl_activate_fs[mounts_to_do:288] mount
+epprd_rg:cl_activate_fs[mounts_to_do:288] 2> /dev/null
+epprd_rg:cl_activate_fs[mounts_to_do:288] paste -s -
+epprd_rg:cl_activate_fs[mounts_to_do:288] awk '$3 ~ /jfs2*$/ {print $2}'
+epprd_rg:cl_activate_fs[mounts_to_do:288] mounted=$'/\t/usr\t/var\t/tmp\t/home\t/admin\t/opt\t/var/adm/ras/livedump\t/ptf'
+epprd_rg:cl_activate_fs[mounts_to_do:288] typeset mounted
+epprd_rg:cl_activate_fs[mounts_to_do:291] shift
+epprd_rg:cl_activate_fs[mounts_to_do:294] typeset -A mountedArray tomountArray
+epprd_rg:cl_activate_fs[mounts_to_do:295] typeset fs
+epprd_rg:cl_activate_fs[mounts_to_do:298] : Create an associative array for each list, which
+epprd_rg:cl_activate_fs[mounts_to_do:299] : has the side effect of dropping any duplicates
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/]=1
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/usr]=1
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/var]=1
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/tmp]=1
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/home]=1
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/admin]=1
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/opt]=1
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/var/adm/ras/livedump]=1
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/ptf]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/board_org]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/mirrlogA]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/mirrlogB]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/oraarch]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/origlogA]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/origlogB]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/sapdata1]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/sapdata2]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/sapdata3]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/sapdata4]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/sapmnt]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/usr/sap]=1
+epprd_rg:cl_activate_fs[mounts_to_do:310] mounted=''
+epprd_rg:cl_activate_fs[mounts_to_do:311] tomount=''
+epprd_rg:cl_activate_fs[mounts_to_do:314] : expand fs from all tomountArray subscript names
+epprd_rg:cl_activate_fs[mounts_to_do:316] set +u
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:329] : Print all remaining subscript names, which are the mount
+epprd_rg:cl_activate_fs[mounts_to_do:330] : points that still have to be mounted
+epprd_rg:cl_activate_fs[mounts_to_do:332] print /board_org /oracle /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /sapmnt /usr/sap
+epprd_rg:cl_activate_fs[mounts_to_do:332] sort -u
+epprd_rg:cl_activate_fs[mounts_to_do:332] tr ' ' '\n'
+epprd_rg:cl_activate_fs[mounts_to_do:334] set -u
+epprd_rg:cl_activate_fs[activate_fs_process_group:378] FILESYSTEMS=$'/board_org\n/oracle\n/oracle/EPP\n/oracle/EPP/mirrlogA\n/oracle/EPP/mirrlogB\n/oracle/EPP/oraarch\n/oracle/EPP/origlogA\n/oracle/EPP/origlogB\n/oracle/EPP/sapdata1\n/oracle/EPP/sapdata2\n/oracle/EPP/sapdata3\n/oracle/EPP/sapdata4\n/sapmnt\n/usr/sap'
+epprd_rg:cl_activate_fs[activate_fs_process_group:379] [[ -z $'/board_org\n/oracle\n/oracle/EPP\n/oracle/EPP/mirrlogA\n/oracle/EPP/mirrlogB\n/oracle/EPP/oraarch\n/oracle/EPP/origlogA\n/oracle/EPP/origlogB\n/oracle/EPP/sapdata1\n/oracle/EPP/sapdata2\n/oracle/EPP/sapdata3\n/oracle/EPP/sapdata4\n/sapmnt\n/usr/sap' ]]
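[Editor's note: as traced above, mounts_to_do loads both lists into associative arrays (which collapses duplicates), drops everything that mount already reports, and prints the sorted remainder one mount point per line. A condensed ksh93 sketch of that logic, not the shipped function:]

#!/bin/ksh93
# mounts_to_do "<mounted>" "<to mount>": print what still needs mounting.
function mounts_to_do
{
    typeset mounted="$1" tomount="$2" fs
    typeset -A mountedArray tomountArray
    for fs in $mounted ; do mountedArray[$fs]=1 ; done
    for fs in $tomount ; do tomountArray[$fs]=1 ; done
    set +u                               # unset subscripts expand to ''
    for fs in "${!tomountArray[@]}" ; do
        # Drop mount points that are already mounted
        [[ ${mountedArray[$fs]} == 1 ]] && unset "tomountArray[$fs]"
    done
    set -u
    print ${!tomountArray[@]} | tr ' ' '\n' | sort -u
}
mounts_to_do "/ /usr /var" "/oracle /oracle /sapmnt"    # -> /oracle and /sapmnt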
+epprd_rg:cl_activate_fs[activate_fs_process_group:385] : Get unique temporary file names by using the resource group and the
+epprd_rg:cl_activate_fs[activate_fs_process_group:386] : current process ID
+epprd_rg:cl_activate_fs[activate_fs_process_group:388] [[ -z epprd_rg ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:397] TMP_FILENAME=epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs[activate_fs_process_group:398] rm -f /tmp/epprd_rg_activate_fs.tmp26739098
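[Editor's note: the temporary file name combines the resource group name with the PID of the running script (26739098 in this trace), so concurrent activations for different groups cannot collide. In sketch form:]

# GROUPNAME=epprd_rg here; $$ is the PID of the current shell.
TMP_FILENAME=${GROUPNAME}_activate_fs.tmp$$
rm -f /tmp/$TMP_FILENAME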
+epprd_rg:cl_activate_fs[activate_fs_process_group:401] : If FSCHECK_TOOL is null get from ODM
+epprd_rg:cl_activate_fs[activate_fs_process_group:403] [[ -z fsck ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:408] print fsck
+epprd_rg:cl_activate_fs[activate_fs_process_group:408] FSCHECK_TOOL=fsck
+epprd_rg:cl_activate_fs[activate_fs_process_group:409] [[ fsck != fsck ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:416] : If RECOVERY_METHOD is null get from ODM
+epprd_rg:cl_activate_fs[activate_fs_process_group:418] [[ -z sequential ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:423] print sequential
+epprd_rg:cl_activate_fs[activate_fs_process_group:423] RECOVERY_METHOD=sequential
+epprd_rg:cl_activate_fs[activate_fs_process_group:424] [[ sequential != sequential ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:431] set -u
+epprd_rg:cl_activate_fs[activate_fs_process_group:434] : If FSCHECK_TOOL is set to logredo, the logredo for each jfslog has
+epprd_rg:cl_activate_fs[activate_fs_process_group:435] : already been done in get_disk_vg_fs, so we only need to do the fsck check
+epprd_rg:cl_activate_fs[activate_fs_process_group:436] : and recovery here before going on to do the mounts
+epprd_rg:cl_activate_fs[activate_fs_process_group:438] [[ fsck == fsck ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:441] TOOL='/usr/sbin/fsck -f -p -o nologredo'
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:445] PS4_LOOP=/board_org
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:446] lsfs /board_org
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:446] grep -w /board_org
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:449] : Verify whether the file system /board_org is already mounted anywhere
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:454] : When a file system is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the characteristics of the file system.
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:457] lsfs -qc /board_org
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:457] MOUNTGUARD='no)'
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:463] fsdb /board_org
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/board_org\n\nFile System Size:\t\t10485032\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t16384\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000009ffd28\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t91\n[10] s_agsize:\t\t0x00004000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x0013ffa5\n \t\t s_fsckpxd.address:\t1310629\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'boardl\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5832\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000000b5\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t181\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5835\t[52] last unmounted:\t0x63d4d331\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/board_org\n\nFile System Size:\t\t10485032\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t16384\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000009ffd28\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t91\n[10] s_agsize:\t\t0x00004000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x0013ffa5\n \t\t s_fsckpxd.address:\t1310629\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'boardl\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5832\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000000b5\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t181\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5835\t[52] last unmounted:\t0x63d4d331\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:469] [[ 'no)' == yes ]]
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/boardlv
The current volume is: /dev/boardlv
Primary superblock is valid.
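[Editor's note: taken together, the loop body traced above, and repeated below for each remaining file system, applies three guards before any mount is attempted: a MountGuard query via lsfs, an on-disk FM_MOUNT flag check via fsdb, and a superblock fsck. A condensed sketch of one iteration, using the mount point and logical volume from this trace as illustrative inputs; the shipped script's error handling is more elaborate:]

#!/bin/ksh93
fs=/board_org        # mount point from FILESYSTEMS
dev=/dev/boardlv     # owning logical volume, as read back from lsfs

# 1. MountGuard: lsfs -qc prints colon-separated characteristics; extract
#    the MountGuard value from them.
MOUNTGUARD=$(LC_ALL=C lsfs -qc "$fs" | tr : '\n' | grep -w MountGuard | cut '-d ' -f2)

# 2. Superblock state: fsdb's 'su' subcommand dumps the superblock; an
#    FM_MOUNT flag means some node mounted the file system and has not
#    cleanly unmounted it.
FMMOUNT=$(fsdb "$fs" <<EOF | awk '{ print $1 }' | grep -w FM_MOUNT
su
q
EOF
)

if [[ $MOUNTGUARD == yes && -n $FMMOUNT ]] ; then
    print -u2 "ERROR: $fs appears to be mounted on another node"
    exit 1
fi

# 3. Check and repair the superblock; logredo already replayed the jfslog
#    in get_disk_vg_fs, hence -o nologredo.
/usr/sbin/fsck -f -p -o nologredo "$dev"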
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:445] PS4_LOOP=/oracle
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:446] lsfs /oracle
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:446] grep -w /oracle
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:449] : Verify whether the file system /oracle is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:454] : When a file system is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the characteristics of the file system.
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:457] lsfs -qc /oracle
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:457] MOUNTGUARD='no)'
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:463] fsdb /oracle
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle\n\nFile System Size:\t\t41941352\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t65536\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000027ff968\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t211\n[10] s_agsize:\t\t0x00010000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x004fff2d\n \t\t s_fsckpxd.address:\t5242669\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'oracle\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5819\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000295\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t661\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d581e\t[52] last unmounted:\t0x63d4d3ee\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle\n\nFile System Size:\t\t41941352\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t65536\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000027ff968\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t211\n[10] s_agsize:\t\t0x00010000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x004fff2d\n \t\t s_fsckpxd.address:\t5242669\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'oracle\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5819\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000295\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t661\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d581e\t[52] last unmounted:\t0x63d4d3ee\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:469] [[ 'no)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/oraclelv
The current volume is: /dev/oraclelv
Primary superblock is valid.
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:446] lsfs /oracle/EPP
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:446] grep -w /oracle/EPP
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:454] : When a file system is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the characteristics of the file system.
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:457] lsfs -qc /oracle/EPP
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:457] MOUNTGUARD='no)'
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:463] fsdb /oracle/EPP
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP\n\nFile System Size:\t\t62912232\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t65536\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x0000000003bff6e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t291\n[10] s_agsize:\t\t0x00010000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x0077fedd\n \t\t s_fsckpxd.address:\t7864029\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'epplv\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5824\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000003d5\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t981\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5826\t[52] last unmounted:\t0x63d4d3ec\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP\n\nFile System Size:\t\t62912232\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t65536\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x0000000003bff6e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t291\n[10] s_agsize:\t\t0x00010000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x0077fedd\n \t\t s_fsckpxd.address:\t7864029\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'epplv\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5824\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000003d5\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t981\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5826\t[52] last unmounted:\t0x63d4d3ec\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:469] [[ 'no)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/epplv
The current volume is: /dev/epplv
Primary superblock is valid.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:446] lsfs /oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:446] grep -w /oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/mirrlogA is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:454] : When a file system is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the characteristics of the file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:457] lsfs -qc /oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:457] MOUNTGUARD='no)'
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:463] fsdb /oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/mirrlogA\n\nFile System Size:\t\t10482792\t(512 byte blocks)\nAggregate Block Size:\t\t512\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t400\n[3] s_size:\t0x00000000009ff468\t[20] s_bsize:\t\t512\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t9\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t0\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t2968\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x009ff468\n \t\t s_fsckpxd.address:\t10482792\n \t\t[28] s_ait.len:\t\t32\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x00000058\n \t\t s_ait.address:\t88\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'mirrlo\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5834\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t32\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000028b0\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t10416\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5836\t[52] last unmounted:\t0x63d4d3b3\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/mirrlogA\n\nFile System Size:\t\t10482792\t(512 byte blocks)\nAggregate Block Size:\t\t512\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t400\n[3] s_size:\t0x00000000009ff468\t[20] s_bsize:\t\t512\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t9\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t0\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t2968\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x009ff468\n \t\t s_fsckpxd.address:\t10482792\n \t\t[28] s_ait.len:\t\t32\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x00000058\n \t\t s_ait.address:\t88\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'mirrlo\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5834\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t32\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000028b0\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t10416\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5836\t[52] last unmounted:\t0x63d4d3b3\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:469] [[ 'no)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/mirrlogAlv
The current volume is: /dev/mirrlogAlv
Primary superblock is valid.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:446] lsfs /oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:446] grep -w /oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/mirrlogB is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:454] : When a file system is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the characteristics of the file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:457] lsfs -qc /oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:457] MOUNTGUARD='no)'
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:463] fsdb /oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/mirrlogB\n\nFile System Size:\t\t10482792\t(512 byte blocks)\nAggregate Block Size:\t\t512\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t400\n[3] s_size:\t0x00000000009ff468\t[20] s_bsize:\t\t512\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t9\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t0\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t2968\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x009ff468\n \t\t s_fsckpxd.address:\t10482792\n \t\t[28] s_ait.len:\t\t32\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x00000058\n \t\t s_ait.address:\t88\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'mirrlo\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5835\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t32\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000028b0\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t10416\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5836\t[52] last unmounted:\t0x63d4d3b4\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/mirrlogB\n\nFile System Size:\t\t10482792\t(512 byte blocks)\nAggregate Block Size:\t\t512\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t400\n[3] s_size:\t0x00000000009ff468\t[20] s_bsize:\t\t512\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t9\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t0\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t2968\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x009ff468\n \t\t s_fsckpxd.address:\t10482792\n \t\t[28] s_ait.len:\t\t32\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x00000058\n \t\t s_ait.address:\t88\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'mirrlo\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5835\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t32\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000028b0\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t10416\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5836\t[52] last unmounted:\t0x63d4d3b4\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:469] [[ 'no)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/mirrlogBlv
The current volume is: /dev/mirrlogBlv
Primary superblock is valid.
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/oraarch
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:446] lsfs /oracle/EPP/oraarch
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:446] grep -w /oracle/EPP/oraarch
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/oraarch is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:454] : When a file system is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the characteristics of the file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:457] lsfs -qc /oracle/EPP/oraarch
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:457] MOUNTGUARD='no)'
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:463] fsdb /oracle/EPP/oraarch
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/oraarch\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'oraarc\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d582e\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5835\t[52] last unmounted:\t0x63d4d3b2\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/oraarch\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'oraarc\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d582e\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5835\t[52] last unmounted:\t0x63d4d3b2\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:469] [[ 'no)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/oraarchlv
The current volume is: /dev/oraarchlv
Primary superblock is valid.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/origlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:446] lsfs /oracle/EPP/origlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:446] grep -w /oracle/EPP/origlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/origlogA is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:454] : When a file system is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the characteristics of the file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:457] lsfs -qc /oracle/EPP/origlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:457] MOUNTGUARD='no)'
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:463] fsdb /oracle/EPP/origlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/origlogA\n\nFile System Size:\t\t10482792\t(512 byte blocks)\nAggregate Block Size:\t\t512\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t400\n[3] s_size:\t0x00000000009ff468\t[20] s_bsize:\t\t512\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t9\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t0\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t2968\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x009ff468\n \t\t s_fsckpxd.address:\t10482792\n \t\t[28] s_ait.len:\t\t32\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x00000058\n \t\t s_ait.address:\t88\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'origlo\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5832\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t32\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000028b0\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t10416\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5836\t[52] last unmounted:\t0x63d4d3b3\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/origlogA\n\nFile System Size:\t\t10482792\t(512 byte blocks)\nAggregate Block Size:\t\t512\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t400\n[3] s_size:\t0x00000000009ff468\t[20] s_bsize:\t\t512\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t9\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t0\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t2968\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x009ff468\n \t\t s_fsckpxd.address:\t10482792\n \t\t[28] s_ait.len:\t\t32\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x00000058\n \t\t s_ait.address:\t88\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'origlo\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5832\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t32\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000028b0\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t10416\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5836\t[52] last unmounted:\t0x63d4d3b3\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:469] [[ 'no)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/origlogAlv
The current volume is: /dev/origlogAlv
Primary superblock is valid.
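
[Editor's note] Note the value captured at :457 above: MOUNTGUARD ends up as the literal string 'no)', trailing parenthesis included, because cut -d' ' -f2 keeps whatever punctuation lsfs -qc prints around the field. The script is unaffected, since it only ever compares the value against the literal 'yes', but a stricter parse would strip the punctuation; an illustrative sketch:

    MOUNTGUARD=$(LC_ALL=C lsfs -qc /oracle/EPP/origlogA | tr : '\n' |
        grep -w MountGuard | cut -d' ' -f2 | tr -d '()')
    [[ $MOUNTGUARD == yes ]] && echo "MountGuard is set; refusing to mount"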
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/origlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:446] lsfs /oracle/EPP/origlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:446] grep -w /oracle/EPP/origlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/origlogB is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:454] : When a file system is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the file system characteristics.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:457] lsfs -qc /oracle/EPP/origlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:457] MOUNTGUARD='no)'
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:463] fsdb /oracle/EPP/origlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/origlogB\n\nFile System Size:\t\t10482792\t(512 byte blocks)\nAggregate Block Size:\t\t512\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t400\n[3] s_size:\t0x00000000009ff468\t[20] s_bsize:\t\t512\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t9\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t0\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t2968\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x009ff468\n \t\t s_fsckpxd.address:\t10482792\n \t\t[28] s_ait.len:\t\t32\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x00000058\n \t\t s_ait.address:\t88\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'origlo\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5833\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t32\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000028b0\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t10416\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5836\t[52] last unmounted:\t0x63d4d3b3\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/origlogB\n\nFile System Size:\t\t10482792\t(512 byte blocks)\nAggregate Block Size:\t\t512\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t400\n[3] s_size:\t0x00000000009ff468\t[20] s_bsize:\t\t512\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t9\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t0\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t2968\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x009ff468\n \t\t s_fsckpxd.address:\t10482792\n \t\t[28] s_ait.len:\t\t32\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x00000058\n \t\t s_ait.address:\t88\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'origlo\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5833\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t32\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000028b0\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t10416\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5836\t[52] last unmounted:\t0x63d4d3b3\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:469] [[ 'no)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/origlogBlv
The current volume is: /dev/origlogBlv
Primary superblock is valid.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:446] lsfs /oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:446] grep -w /oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/sapdata1 is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:454] : When a file system is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the file system characteristics.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:457] lsfs -qc /oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:457] MOUNTGUARD='no)'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:463] fsdb /oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/sapdata1\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapdat\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d582f\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5835\t[52] last unmounted:\t0x63d4d3b3\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/sapdata1\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapdat\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d582f\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5835\t[52] last unmounted:\t0x63d4d3b3\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:469] [[ 'no)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/sapdata1lv
The current volume is: /dev/sapdata1lv
Primary superblock is valid.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:446] lsfs /oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:446] grep -w /oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/sapdata2 is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:454] : When a file system is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the file system characteristics.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:457] lsfs -qc /oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:457] MOUNTGUARD='no)'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:463] fsdb /oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/sapdata2\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapdat\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5830\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5835\t[52] last unmounted:\t0x63d4d3b3\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/sapdata2\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapdat\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5830\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5835\t[52] last unmounted:\t0x63d4d3b3\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:469] [[ 'no)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/sapdata2lv
The current volume is: /dev/sapdata2lv
Primary superblock is valid.
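
[Editor's note] One cosmetic oddity of the trace: for /oracle/EPP/sapdata2 the cut stage of the :457 pipeline is logged before the grep stage, while earlier blocks log grep first. Under set -x each element of a pipeline is traced when its own process starts, so the relative order of '+' lines across pipeline stages is scheduling-dependent and carries no meaning. A trivial reproduction (any multi-stage pipeline will do):

    set -x
    lsfs -qc /sapmnt | tr : '\n' | grep -w MountGuard | cut -d' ' -f2
    # The four '+' trace lines may appear in a different order on each run.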
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:446] lsfs /oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:446] grep -w /oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/sapdata3 is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:454] : When a file system is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the file system characteristics.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:457] lsfs -qc /oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:457] MOUNTGUARD='no)'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:463] fsdb /oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/sapdata3\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapdat\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5831\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5835\t[52] last unmounted:\t0x63d4d3b3\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/sapdata3\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapdat\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5831\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5835\t[52] last unmounted:\t0x63d4d3b3\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:469] [[ 'no)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/sapdata3lv
The current volume is: /dev/sapdata3lv
Primary superblock is valid.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:446] lsfs /oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:446] grep -w /oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/sapdata4 is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:454] : When a file system is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the file system characteristics.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:457] lsfs -qc /oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:457] MOUNTGUARD='no)'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:463] fsdb /oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/sapdata4\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapdat\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5831\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5835\t[52] last unmounted:\t0x63d4d3b3\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/sapdata4\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapdat\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5831\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d5835\t[52] last unmounted:\t0x63d4d3b3\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:469] [[ 'no)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/sapdata4lv
The current volume is: /dev/sapdata4lv
Primary superblock is valid.
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:445] PS4_LOOP=/sapmnt
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:446] lsfs /sapmnt
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:446] grep -w /sapmnt
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:449] : Verify whether the file system /sapmnt is already mounted anywhere
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:454] : When a file system is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the file system characteristics.
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:457] lsfs -qc /sapmnt
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:457] MOUNTGUARD='no)'
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:463] fsdb /sapmnt
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/sapmnt\n\nFile System Size:\t\t20970472\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t32768\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000013ffbe8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t131\n[10] s_agsize:\t\t0x00008000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x0027ff7d\n \t\t s_fsckpxd.address:\t2621309\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapmnt\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5818\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000155\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t341\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d581e\t[52] last unmounted:\t0x63d4d408\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/sapmnt\n\nFile System Size:\t\t20970472\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t32768\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000013ffbe8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t131\n[10] s_agsize:\t\t0x00008000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x0027ff7d\n \t\t s_fsckpxd.address:\t2621309\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapmnt\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5818\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000155\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t341\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d581e\t[52] last unmounted:\t0x63d4d408\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:469] [[ 'no)' == yes ]]
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/sapmntlv
The current volume is: /dev/sapmntlv
Primary superblock is valid.
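
[Editor's note] The s_state_ts entries in these superblock dumps are plain Unix epoch seconds. For /sapmnt, 0x639d581e (last mounted) and 0x63d4d408 (last unmounted) decode to 17 December 2022 and 28 January 2023 respectively, consistent with the activity window of this log. A quick way to decode them, assuming perl is available on the node:

    # Epoch-second decoding (gmtime shown; use localtime for node-local time):
    perl -le 'print scalar gmtime 0x639d581e'   # last mounted
    perl -le 'print scalar gmtime 0x63d4d408'   # last unmounted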
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:445] PS4_LOOP=/usr/sap
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:446] lsfs /usr/sap
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:446] grep -w /usr/sap
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:449] : Verify whether the file system /usr/sap is already mounted anywhere
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:454] : When a file system is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the file system characteristics.
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:457] lsfs -qc /usr/sap
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:457] MOUNTGUARD='no)'
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:463] fsdb /usr/sap
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/usr/sap\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'saplv\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5815\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d581e\t[52] last unmounted:\t0x63d4d3c7\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/usr/sap\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000001\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x00000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'saplv\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5815\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x639d581e\t[52] last unmounted:\t0x63d4d3c7\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:469] [[ 'no)' == yes ]]
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:473] [[ -n '' ]]
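The test above derives FMMOUNT from the superblock dump captured at activate_fs_process_group:463: grep -w keeps only a line containing the whole word FM_MOUNT, and awk prints its first field. Since s_state here shows FM_CLEAN, FMMOUNT comes back empty and the forced-recovery path is skipped. A minimal sketch of the same check, assuming the dump is already held in FMMOUNT_OUT:

# Empty FMMOUNT means the superblock does not carry the FM_MOUNT state flag,
# i.e. the file system was cleanly unmounted.
FMMOUNT=$(print -- "$FMMOUNT_OUT" | grep -w FM_MOUNT | awk '{ print $1 }')
if [[ -n $FMMOUNT ]]; then
    : # not cleanly unmounted: a full fsck would be forced before mounting
fi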
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/saplv
The current volume is: /dev/saplv
Primary superblock is valid.
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:513] : Allow any backgrounded fsck operations to finish
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:515] wait
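In sequential mode the fsck runs in the foreground, so the wait at activate_fs_process_group:515 is effectively a no-op; it exists to collect the fsck jobs that the parallel branch would have backgrounded. A sketch, with the flag semantics as commonly read on AIX (hedged: -f forces the check even on a clean file system, -p fixes minor inconsistencies non-interactively, -o nologredo skips JFS2 log replay):

/usr/sbin/fsck -f -p -o nologredo /dev/saplv   # preen-mode check, no log replay
wait                                           # reap any backgrounded fsck jobs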
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:519] : Now attempt to mount all the file systems
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:521] ALLFS=All_filesystems
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:522] cl_RMupdate resource_acquiring All_filesystems cl_activate_fs
2023-01-28T17:10:40.713652
2023-01-28T17:10:40.718011
+epprd_rg:cl_activate_fs(0.783):/usr/sap[activate_fs_process_group:524] PS4_TIMER=true
+epprd_rg:cl_activate_fs(0.783):/usr/sap[activate_fs_process_group:524] typeset PS4_TIMER
+epprd_rg:cl_activate_fs(0.783):/board_org[activate_fs_process_group:527] PS4_LOOP=/board_org
+epprd_rg:cl_activate_fs(0.783):/board_org[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(0.783):/board_org[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(0.783):/board_org[activate_fs_process_group:540] fs_mount /board_org fsck epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:69] FS=/board_org
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:81] : Here check to see if the information in /etc/filesystems for /board_org
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:86] lsfs -c /board_org
+epprd_rg:cl_activate_fs(0.784):/board_org[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(0.789):/board_org[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(0.784):/board_org[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/board_org:/dev/boardlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(0.789):/board_org[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(0.789):/board_org[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(0.790):/board_org[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(0.784):/board_org[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/board_org:/dev/boardlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(0.791):/board_org[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(0.791):/board_org[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(0.791):/board_org[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(0.793):/board_org[fs_mount:100] LV_name=boardlv
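lsfs -c prints a colon-delimited header plus one record per file system; tail -1 drops the header and the IFS=: read splits the record into fields. The logical volume name is then taken from the device path, presumably by stripping /dev/ (the trace shows the result at fs_mount:100 but not the expansion itself). A sketch:

FS_info=$(LC_ALL=C lsfs -c /board_org 2>&1)
print -- "$FS_info" | tail -1 | IFS=: read skip LV_dev_name vfs_type rest
LV_name=${LV_dev_name##*/}      # /dev/boardlv -> boardlv (assumed derivation)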
+epprd_rg:cl_activate_fs(0.793):/board_org[fs_mount:101] getlvcb -T -A boardlv
+epprd_rg:cl_activate_fs(0.794):/board_org[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(0.811):/board_org[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(0.794):/board_org[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.11 \n\t lvname = boardlv \n\t label = /board_org \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:47 2022\n \t time modified = Sat Dec 17 14:48:34 2022\n '
+epprd_rg:cl_activate_fs(0.812):/board_org[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(0.812):/board_org[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(0.813):/board_org[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(0.794):/board_org[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.11 \n\t lvname = boardlv \n\t label = /board_org \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:47 2022\n \t time modified = Sat Dec 17 14:48:34 2022\n '
+epprd_rg:cl_activate_fs(0.814):/board_org[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(0.814):/board_org[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(0.815):/board_org[fs_mount:115] clodmget -q 'name = boardlv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(0.819):/board_org[fs_mount:115] CuAt_label=/board_org
+epprd_rg:cl_activate_fs(0.819):/board_org[fs_mount:118] : At this point, if things are working correctly, /board_org from /etc/filesystems
+epprd_rg:cl_activate_fs(0.819):/board_org[fs_mount:119] : should match /board_org from CuAt ODM and /board_org from the LVCB
+epprd_rg:cl_activate_fs(0.819):/board_org[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(0.819):/board_org[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(0.819):/board_org[fs_mount:123] [[ /board_org != /board_org ]]
+epprd_rg:cl_activate_fs(0.819):/board_org[fs_mount:128] [[ /board_org != /board_org ]]
+epprd_rg:cl_activate_fs(0.819):/board_org[fs_mount:133] (( 0 == 1 ))
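Lines fs_mount:114 through fs_mount:128 perform a three-way consistency check: the mount point from /etc/filesystems (via lsfs), the label stored in the on-disk LVCB (via getlvcb), and the label attribute in the CuAt ODM (via clodmget) should all agree. Only the comparison happens here; any best-effort repair was already done in clvaryonvg. A sketch under that reading (the mismatch handling shown is illustrative, not the script's actual error path):

getlvcb -T -A boardlv 2>&1 | grep -w 'label =' | read skip skip LVCB_label
CuAt_label=$(clodmget -q "name = boardlv and attribute = label" -f value -n CuAt)
[[ $FS != $CuAt_label ]] && print -u2 "WARNING: CuAt label does not match $FS"
[[ $FS != $LVCB_label ]] && print -u2 "WARNING: LVCB label does not match $FS"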
+epprd_rg:cl_activate_fs(0.819):/board_org[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(0.819):/board_org[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
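clwparroot prints the root path of the WPAR hosting the resource group, if one is configured. Here loadWparName finds no WPAR_NAME attribute in HACMPresource, so the script exits 0 with no output and fs_mount proceeds in the global environment with WPAR_ROOT=''. The decisive check, reduced to a sketch:

wparName=$(clodmget -q "name = WPAR_NAME" -f value -n HACMPresource)
if [[ -z $wparName ]]; then
    exit 0   # RG not WPAR-enclosed: print nothing, caller gets WPAR_ROOT=''
fi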
+epprd_rg:cl_activate_fs(0.839):/board_org[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(0.839):/board_org[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(0.839):/board_org[fs_mount:160] amlog_trace '' 'Activating Filesystem|/board_org'
+epprd_rg:cl_activate_fs(0.839):/board_org[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(0.840):/board_org[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(0.864):/board_org[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(0.866):/board_org[amlog_trace:319] DATE=2023-01-28T17:10:40.802224
+epprd_rg:cl_activate_fs(0.866):/board_org[amlog_trace:320] echo '|2023-01-28T17:10:40.802224|INFO: Activating Filesystem|/board_org'
+epprd_rg:cl_activate_fs(0.866):/board_org[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(0.867):/board_org[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(0.869):/board_org[fs_mount:162] : Try to mount filesystem /board_org at Jan 28 17:10:40.000
+epprd_rg:cl_activate_fs(0.869):/board_org[fs_mount:163] mount /board_org
+epprd_rg:cl_activate_fs(0.881):/board_org[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(0.881):/board_org[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(0.881):/board_org[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(0.881):/board_org[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/board_org'
+epprd_rg:cl_activate_fs(0.881):/board_org[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(0.882):/board_org[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(0.906):/board_org[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(0.909):/board_org[amlog_trace:319] DATE=2023-01-28T17:10:40.844938
+epprd_rg:cl_activate_fs(0.909):/board_org[amlog_trace:320] echo '|2023-01-28T17:10:40.844938|INFO: Activating Filesystems completed|/board_org'
+epprd_rg:cl_activate_fs(0.909):/board_org[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(0.909):/board_org[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(0.909):/board_org[fs_mount:226] : Each of the V, R, M and F fields are padded to fixed length,
+epprd_rg:cl_activate_fs(0.909):/board_org[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(0.909):/board_org[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(0.909):/board_org[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(0.909):/board_org[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(0.909):/board_org[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(0.909):/board_org[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(0.909):/board_org[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(0.909):/board_org[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(0.910):/board_org[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(0.911):/board_org[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(0.913):/board_org[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(0.913):/board_org[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(0.913):/board_org[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(0.913):/board_org[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(0.913):/board_org[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
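The typeset -Z attributes zero-pad R to two digits and M and F to three on expansion, so the dotted level 7.2.5.102 reported by lslpp collapses into the single integer 702005102 and the minimum-level test becomes plain arithmetic. The trace does not show how VRMF is assembled at fs_mount:237; concatenation is the obvious reading, sketched here:

typeset -li V R M F VRMF
typeset -Z2 R
typeset -Z3 M F
lslpp -lcqOr bos.rte.filesystem | cut -f3 -d: | IFS=. read V R M F
VRMF=$V$R$M$F                      # 7 / 02 / 005 / 102 -> 702005102 (assumed)
(( V == 7 && VRMF >= 701001000 )) && : # JFS2 mountguard available at this level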
+epprd_rg:cl_activate_fs(0.913):/board_org[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(0.913):/board_org[fs_mount:245] : the setting would cause VG timestamp change so run once
+epprd_rg:cl_activate_fs(0.913):/board_org[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(0.794):/board_org[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.11 \n\t lvname = boardlv \n\t label = /board_org \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:47 2022\n \t time modified = Sat Dec 17 14:48:34 2022\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(0.913):/board_org[fs_mount:249] chfs -a mountguard=yes /board_org
+epprd_rg:cl_activate_fs(0.914):/board_org[fs_mount:249] CLUSTER_OVERRIDE=yes
/board_org is now guarded against concurrent mounts.
+epprd_rg:cl_activate_fs(1.071):/board_org[fs_mount:255] return 0
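Mountguard is the JFS2 protection against the same file system being mounted concurrently from two nodes. Because chfs rewrites attributes and so bumps the VG timestamp, fs_mount:247 only runs it when the fs= stanza captured by getlvcb does not already say mountguard=yes. Reduced to a sketch:

if [[ $LVCB_info != *mountguard=yes* ]]; then
    # one-time change; it updates the VG timestamp, hence the guard above
    CLUSTER_OVERRIDE=yes chfs -a mountguard=yes /board_org
fi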
+epprd_rg:cl_activate_fs(1.071):/oracle[activate_fs_process_group:527] PS4_LOOP=/oracle
+epprd_rg:cl_activate_fs(1.071):/oracle[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(1.071):/oracle[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(1.071):/oracle[activate_fs_process_group:540] fs_mount /oracle fsck epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:69] FS=/oracle
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:81] : Here check to see if the information in /etc/filesystems for /oracle
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(1.071):/oracle[fs_mount:86] lsfs -c /oracle
+epprd_rg:cl_activate_fs(1.072):/oracle[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(1.077):/oracle[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(1.072):/oracle[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle:/dev/oraclelv:jfs2:::41943040:rw:no:no'
+epprd_rg:cl_activate_fs(1.077):/oracle[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(1.077):/oracle[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.078):/oracle[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(1.072):/oracle[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle:/dev/oraclelv:jfs2:::41943040:rw:no:no'
+epprd_rg:cl_activate_fs(1.079):/oracle[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(1.080):/oracle[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(1.080):/oracle[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(1.081):/oracle[fs_mount:100] LV_name=oraclelv
+epprd_rg:cl_activate_fs(1.081):/oracle[fs_mount:101] getlvcb -T -A oraclelv
+epprd_rg:cl_activate_fs(1.082):/oracle[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(1.100):/oracle[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(1.082):/oracle[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.4 \n\t lvname = oraclelv \n\t label = /oracle \n\t machine id = 44AF14B00 \n\t number lps = 40 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:42 2022\n \t time modified = Sat Dec 17 14:48:09 2022\n '
+epprd_rg:cl_activate_fs(1.100):/oracle[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(1.100):/oracle[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.101):/oracle[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(1.082):/oracle[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.4 \n\t lvname = oraclelv \n\t label = /oracle \n\t machine id = 44AF14B00 \n\t number lps = 40 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:42 2022\n \t time modified = Sat Dec 17 14:48:09 2022\n '
+epprd_rg:cl_activate_fs(1.102):/oracle[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(1.103):/oracle[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(1.104):/oracle[fs_mount:115] clodmget -q 'name = oraclelv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(1.108):/oracle[fs_mount:115] CuAt_label=/oracle
+epprd_rg:cl_activate_fs(1.108):/oracle[fs_mount:118] : At this point, if things are working correctly, /oracle from /etc/filesystems
+epprd_rg:cl_activate_fs(1.108):/oracle[fs_mount:119] : should match /oracle from CuAt ODM and /oracle from the LVCB
+epprd_rg:cl_activate_fs(1.108):/oracle[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(1.108):/oracle[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(1.108):/oracle[fs_mount:123] [[ /oracle != /oracle ]]
+epprd_rg:cl_activate_fs(1.108):/oracle[fs_mount:128] [[ /oracle != /oracle ]]
+epprd_rg:cl_activate_fs(1.108):/oracle[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.108):/oracle[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(1.108):/oracle[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(1.128):/oracle[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.128):/oracle[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(1.128):/oracle[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle'
+epprd_rg:cl_activate_fs(1.128):/oracle[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.129):/oracle[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.152):/oracle[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.155):/oracle[amlog_trace:319] DATE=2023-01-28T17:10:41.090991
+epprd_rg:cl_activate_fs(1.155):/oracle[amlog_trace:320] echo '|2023-01-28T17:10:41.090991|INFO: Activating Filesystem|/oracle'
+epprd_rg:cl_activate_fs(1.155):/oracle[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.155):/oracle[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(1.158):/oracle[fs_mount:162] : Try to mount filesystem /oracle at Jan 28 17:10:41.000
+epprd_rg:cl_activate_fs(1.158):/oracle[fs_mount:163] mount /oracle
+epprd_rg:cl_activate_fs(1.169):/oracle[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.169):/oracle[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(1.169):/oracle[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(1.169):/oracle[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle'
+epprd_rg:cl_activate_fs(1.169):/oracle[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.170):/oracle[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.194):/oracle[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.197):/oracle[amlog_trace:319] DATE=2023-01-28T17:10:41.132698
+epprd_rg:cl_activate_fs(1.197):/oracle[amlog_trace:320] echo '|2023-01-28T17:10:41.132698|INFO: Activating Filesystems completed|/oracle'
+epprd_rg:cl_activate_fs(1.197):/oracle[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.197):/oracle[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(1.197):/oracle[fs_mount:226] : Each of the V, R, M and F fields are padded to fixed length,
+epprd_rg:cl_activate_fs(1.197):/oracle[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(1.197):/oracle[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(1.197):/oracle[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(1.197):/oracle[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(1.197):/oracle[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(1.197):/oracle[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(1.197):/oracle[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(1.197):/oracle[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(1.198):/oracle[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(1.199):/oracle[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(1.201):/oracle[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(1.201):/oracle[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(1.201):/oracle[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(1.201):/oracle[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(1.201):/oracle[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
+epprd_rg:cl_activate_fs(1.201):/oracle[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(1.201):/oracle[fs_mount:245] : the setting would cause VG timestamp change so run once
+epprd_rg:cl_activate_fs(1.201):/oracle[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(1.082):/oracle[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.4 \n\t lvname = oraclelv \n\t label = /oracle \n\t machine id = 44AF14B00 \n\t number lps = 40 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:42 2022\n \t time modified = Sat Dec 17 14:48:09 2022\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(1.201):/oracle[fs_mount:249] chfs -a mountguard=yes /oracle
+epprd_rg:cl_activate_fs(1.202):/oracle[fs_mount:249] CLUSTER_OVERRIDE=yes
/oracle is now guarded against concurrent mounts.
+epprd_rg:cl_activate_fs(1.355):/oracle[fs_mount:255] return 0
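The same fs_mount sequence now repeats for each remaining mount point in the resource group, in mount-point order so that parents (/oracle) come up before children (/oracle/EPP, then the mirrlog file systems). PS4_LOOP retags the trace prefix on each pass. The enclosing loop, sketched from the trace (the real activate_fs_process_group iterates the RG's file-system list rather than a literal one):

for FS in /board_org /oracle /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB
do
    PS4_LOOP=$FS                # tag trace output with the current mount point
    # sequential mode: run in the foreground so recovery is serialized
    fs_mount $FS fsck epprd_rg_activate_fs.tmp26739098
done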
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[activate_fs_process_group:540] fs_mount /oracle/EPP fsck epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:69] FS=/oracle/EPP
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:81] : Here check to see if the information in /etc/filesystems for /oracle/EPP
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(1.355):/oracle/EPP[fs_mount:86] lsfs -c /oracle/EPP
+epprd_rg:cl_activate_fs(1.356):/oracle/EPP[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(1.361):/oracle/EPP[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(1.356):/oracle/EPP[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP:/dev/epplv:jfs2:::62914560:rw:no:no'
+epprd_rg:cl_activate_fs(1.361):/oracle/EPP[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(1.361):/oracle/EPP[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.362):/oracle/EPP[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(1.356):/oracle/EPP[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP:/dev/epplv:jfs2:::62914560:rw:no:no'
+epprd_rg:cl_activate_fs(1.363):/oracle/EPP[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(1.364):/oracle/EPP[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(1.364):/oracle/EPP[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(1.365):/oracle/EPP[fs_mount:100] LV_name=epplv
+epprd_rg:cl_activate_fs(1.365):/oracle/EPP[fs_mount:101] getlvcb -T -A epplv
+epprd_rg:cl_activate_fs(1.366):/oracle/EPP[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(1.384):/oracle/EPP[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(1.366):/oracle/EPP[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.5 \n\t lvname = epplv \n\t label = /oracle/EPP \n\t machine id = 44AF14B00 \n\t number lps = 60 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:43 2022\n \t time modified = Sat Dec 17 14:48:21 2022\n '
+epprd_rg:cl_activate_fs(1.384):/oracle/EPP[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(1.384):/oracle/EPP[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.385):/oracle/EPP[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(1.366):/oracle/EPP[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.5 \n\t lvname = epplv \n\t label = /oracle/EPP \n\t machine id = 44AF14B00 \n\t number lps = 60 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:43 2022\n \t time modified = Sat Dec 17 14:48:21 2022\n '
+epprd_rg:cl_activate_fs(1.386):/oracle/EPP[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(1.386):/oracle/EPP[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(1.388):/oracle/EPP[fs_mount:115] clodmget -q 'name = epplv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(1.391):/oracle/EPP[fs_mount:115] CuAt_label=/oracle/EPP
+epprd_rg:cl_activate_fs(1.391):/oracle/EPP[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP from /etc/filesystems
+epprd_rg:cl_activate_fs(1.391):/oracle/EPP[fs_mount:119] : should match /oracle/EPP from CuAt ODM and /oracle/EPP from the LVCB
+epprd_rg:cl_activate_fs(1.391):/oracle/EPP[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(1.391):/oracle/EPP[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(1.391):/oracle/EPP[fs_mount:123] [[ /oracle/EPP != /oracle/EPP ]]
+epprd_rg:cl_activate_fs(1.391):/oracle/EPP[fs_mount:128] [[ /oracle/EPP != /oracle/EPP ]]
+epprd_rg:cl_activate_fs(1.391):/oracle/EPP[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.391):/oracle/EPP[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(1.391):/oracle/EPP[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(1.411):/oracle/EPP[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.411):/oracle/EPP[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(1.411):/oracle/EPP[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP'
+epprd_rg:cl_activate_fs(1.411):/oracle/EPP[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.412):/oracle/EPP[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.436):/oracle/EPP[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.439):/oracle/EPP[amlog_trace:319] DATE=2023-01-28T17:10:41.374882
+epprd_rg:cl_activate_fs(1.439):/oracle/EPP[amlog_trace:320] echo '|2023-01-28T17:10:41.374882|INFO: Activating Filesystem|/oracle/EPP'
+epprd_rg:cl_activate_fs(1.439):/oracle/EPP[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.439):/oracle/EPP[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(1.442):/oracle/EPP[fs_mount:162] : Try to mount filesystem /oracle/EPP at Jan 28 17:10:41.000
+epprd_rg:cl_activate_fs(1.442):/oracle/EPP[fs_mount:163] mount /oracle/EPP
+epprd_rg:cl_activate_fs(1.468):/oracle/EPP[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.468):/oracle/EPP[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(1.468):/oracle/EPP[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(1.468):/oracle/EPP[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP'
+epprd_rg:cl_activate_fs(1.468):/oracle/EPP[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.469):/oracle/EPP[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.493):/oracle/EPP[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.496):/oracle/EPP[amlog_trace:319] DATE=2023-01-28T17:10:41.431447
+epprd_rg:cl_activate_fs(1.496):/oracle/EPP[amlog_trace:320] echo '|2023-01-28T17:10:41.431447|INFO: Activating Filesystems completed|/oracle/EPP'
+epprd_rg:cl_activate_fs(1.496):/oracle/EPP[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.496):/oracle/EPP[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(1.496):/oracle/EPP[fs_mount:226] : Each of the V, R, M and F fields are padded to fixed length,
+epprd_rg:cl_activate_fs(1.496):/oracle/EPP[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(1.496):/oracle/EPP[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(1.496):/oracle/EPP[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(1.496):/oracle/EPP[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(1.496):/oracle/EPP[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(1.496):/oracle/EPP[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(1.496):/oracle/EPP[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(1.496):/oracle/EPP[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(1.497):/oracle/EPP[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(1.498):/oracle/EPP[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(1.499):/oracle/EPP[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(1.499):/oracle/EPP[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(1.500):/oracle/EPP[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(1.500):/oracle/EPP[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(1.500):/oracle/EPP[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
+epprd_rg:cl_activate_fs(1.500):/oracle/EPP[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(1.500):/oracle/EPP[fs_mount:245] : the setting would cause VG timestamp change so run once
+epprd_rg:cl_activate_fs(1.500):/oracle/EPP[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(1.366):/oracle/EPP[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.5 \n\t lvname = epplv \n\t label = /oracle/EPP \n\t machine id = 44AF14B00 \n\t number lps = 60 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:43 2022\n \t time modified = Sat Dec 17 14:48:21 2022\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(1.500):/oracle/EPP[fs_mount:249] chfs -a mountguard=yes /oracle/EPP
+epprd_rg:cl_activate_fs(1.501):/oracle/EPP[fs_mount:249] CLUSTER_OVERRIDE=yes
/oracle/EPP is now guarded against concurrent mounts.
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP[fs_mount:255] return 0
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[activate_fs_process_group:540] fs_mount /oracle/EPP/mirrlogA fsck epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:69] FS=/oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:81] : Here check to see if the information in /etc/filesystems for /oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(1.654):/oracle/EPP/mirrlogA[fs_mount:86] lsfs -c /oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs(1.655):/oracle/EPP/mirrlogA[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(1.660):/oracle/EPP/mirrlogA[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(1.656):/oracle/EPP/mirrlogA[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/mirrlogA:/dev/mirrlogAlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(1.660):/oracle/EPP/mirrlogA[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(1.660):/oracle/EPP/mirrlogA[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.661):/oracle/EPP/mirrlogA[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(1.656):/oracle/EPP/mirrlogA[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/mirrlogA:/dev/mirrlogAlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(1.662):/oracle/EPP/mirrlogA[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(1.663):/oracle/EPP/mirrlogA[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(1.663):/oracle/EPP/mirrlogA[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(1.664):/oracle/EPP/mirrlogA[fs_mount:100] LV_name=mirrlogAlv
+epprd_rg:cl_activate_fs(1.664):/oracle/EPP/mirrlogA[fs_mount:101] getlvcb -T -A mirrlogAlv
+epprd_rg:cl_activate_fs(1.665):/oracle/EPP/mirrlogA[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(1.682):/oracle/EPP/mirrlogA[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(1.665):/oracle/EPP/mirrlogA[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.14 \n\t lvname = mirrlogAlv \n\t label = /oracle/EPP/mirrlogA \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:49 2022\n \t time modified = Sat Dec 17 14:48:36 2022\n '
+epprd_rg:cl_activate_fs(1.682):/oracle/EPP/mirrlogA[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(1.682):/oracle/EPP/mirrlogA[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.683):/oracle/EPP/mirrlogA[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(1.665):/oracle/EPP/mirrlogA[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.14 \n\t lvname = mirrlogAlv \n\t label = /oracle/EPP/mirrlogA \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:49 2022\n \t time modified = Sat Dec 17 14:48:36 2022\n '
+epprd_rg:cl_activate_fs(1.684):/oracle/EPP/mirrlogA[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(1.685):/oracle/EPP/mirrlogA[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(1.686):/oracle/EPP/mirrlogA[fs_mount:115] clodmget -q 'name = mirrlogAlv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(1.690):/oracle/EPP/mirrlogA[fs_mount:115] CuAt_label=/oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs(1.690):/oracle/EPP/mirrlogA[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP/mirrlogA from /etc/filesystems
+epprd_rg:cl_activate_fs(1.690):/oracle/EPP/mirrlogA[fs_mount:119] : should match /oracle/EPP/mirrlogA from CuAt ODM and /oracle/EPP/mirrlogA from the LVCB
+epprd_rg:cl_activate_fs(1.690):/oracle/EPP/mirrlogA[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(1.690):/oracle/EPP/mirrlogA[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(1.690):/oracle/EPP/mirrlogA[fs_mount:123] [[ /oracle/EPP/mirrlogA != /oracle/EPP/mirrlogA ]]
+epprd_rg:cl_activate_fs(1.690):/oracle/EPP/mirrlogA[fs_mount:128] [[ /oracle/EPP/mirrlogA != /oracle/EPP/mirrlogA ]]
+epprd_rg:cl_activate_fs(1.690):/oracle/EPP/mirrlogA[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.690):/oracle/EPP/mirrlogA[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(1.690):/oracle/EPP/mirrlogA[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(1.710):/oracle/EPP/mirrlogA[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.710):/oracle/EPP/mirrlogA[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(1.710):/oracle/EPP/mirrlogA[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP/mirrlogA'
+epprd_rg:cl_activate_fs(1.710):/oracle/EPP/mirrlogA[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.711):/oracle/EPP/mirrlogA[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.735):/oracle/EPP/mirrlogA[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.737):/oracle/EPP/mirrlogA[amlog_trace:319] DATE=2023-01-28T17:10:41.673156
+epprd_rg:cl_activate_fs(1.737):/oracle/EPP/mirrlogA[amlog_trace:320] echo '|2023-01-28T17:10:41.673156|INFO: Activating Filesystem|/oracle/EPP/mirrlogA'
+epprd_rg:cl_activate_fs(1.737):/oracle/EPP/mirrlogA[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.737):/oracle/EPP/mirrlogA[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(1.740):/oracle/EPP/mirrlogA[fs_mount:162] : Try to mount filesystem /oracle/EPP/mirrlogA at Jan 28 17:10:41.000
+epprd_rg:cl_activate_fs(1.740):/oracle/EPP/mirrlogA[fs_mount:163] mount /oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs(1.752):/oracle/EPP/mirrlogA[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.752):/oracle/EPP/mirrlogA[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(1.752):/oracle/EPP/mirrlogA[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(1.752):/oracle/EPP/mirrlogA[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP/mirrlogA'
+epprd_rg:cl_activate_fs(1.752):/oracle/EPP/mirrlogA[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.753):/oracle/EPP/mirrlogA[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.777):/oracle/EPP/mirrlogA[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.779):/oracle/EPP/mirrlogA[amlog_trace:319] DATE=2023-01-28T17:10:41.715361
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/mirrlogA[amlog_trace:320] echo '|2023-01-28T17:10:41.715361|INFO: Activating Filesystems completed|/oracle/EPP/mirrlogA'
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/mirrlogA[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/mirrlogA[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/mirrlogA[fs_mount:226] : Each of the V, R, M and F fields are padded to fixed length,
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/mirrlogA[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/mirrlogA[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/mirrlogA[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/mirrlogA[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/mirrlogA[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/mirrlogA[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/mirrlogA[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/mirrlogA[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(1.781):/oracle/EPP/mirrlogA[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(1.781):/oracle/EPP/mirrlogA[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(1.783):/oracle/EPP/mirrlogA[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(1.783):/oracle/EPP/mirrlogA[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(1.784):/oracle/EPP/mirrlogA[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(1.784):/oracle/EPP/mirrlogA[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(1.784):/oracle/EPP/mirrlogA[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
+epprd_rg:cl_activate_fs(1.784):/oracle/EPP/mirrlogA[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(1.784):/oracle/EPP/mirrlogA[fs_mount:245] : the setting would cause VG timestamp change so run once
+epprd_rg:cl_activate_fs(1.784):/oracle/EPP/mirrlogA[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(1.665):/oracle/EPP/mirrlogA[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.14 \n\t lvname = mirrlogAlv \n\t label = /oracle/EPP/mirrlogA \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:49 2022\n \t time modified = Sat Dec 17 14:48:36 2022\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(1.784):/oracle/EPP/mirrlogA[fs_mount:249] chfs -a mountguard=yes /oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs(1.785):/oracle/EPP/mirrlogA[fs_mount:249] CLUSTER_OVERRIDE=yes
/oracle/EPP/mirrlogA is now guarded against concurrent mounts.
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogA[fs_mount:255] return 0
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[activate_fs_process_group:540] fs_mount /oracle/EPP/mirrlogB fsck epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:69] FS=/oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:81] : Here check to see if the information in /etc/filesystems for /oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(1.939):/oracle/EPP/mirrlogB[fs_mount:86] lsfs -c /oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs(1.940):/oracle/EPP/mirrlogB[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(1.945):/oracle/EPP/mirrlogB[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(1.940):/oracle/EPP/mirrlogB[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/mirrlogB:/dev/mirrlogBlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(1.945):/oracle/EPP/mirrlogB[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(1.945):/oracle/EPP/mirrlogB[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.946):/oracle/EPP/mirrlogB[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(1.940):/oracle/EPP/mirrlogB[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/mirrlogB:/dev/mirrlogBlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(1.947):/oracle/EPP/mirrlogB[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(1.948):/oracle/EPP/mirrlogB[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(1.948):/oracle/EPP/mirrlogB[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(1.949):/oracle/EPP/mirrlogB[fs_mount:100] LV_name=mirrlogBlv
+epprd_rg:cl_activate_fs(1.949):/oracle/EPP/mirrlogB[fs_mount:101] getlvcb -T -A mirrlogBlv
+epprd_rg:cl_activate_fs(1.950):/oracle/EPP/mirrlogB[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(1.967):/oracle/EPP/mirrlogB[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(1.950):/oracle/EPP/mirrlogB[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.15 \n\t lvname = mirrlogBlv \n\t label = /oracle/EPP/mirrlogB \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:50 2022\n \t time modified = Sat Dec 17 14:48:37 2022\n '
+epprd_rg:cl_activate_fs(1.968):/oracle/EPP/mirrlogB[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(1.968):/oracle/EPP/mirrlogB[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.968):/oracle/EPP/mirrlogB[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(1.950):/oracle/EPP/mirrlogB[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.15 \n\t lvname = mirrlogBlv \n\t label = /oracle/EPP/mirrlogB \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:50 2022\n \t time modified = Sat Dec 17 14:48:37 2022\n '
+epprd_rg:cl_activate_fs(1.970):/oracle/EPP/mirrlogB[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(1.970):/oracle/EPP/mirrlogB[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(1.972):/oracle/EPP/mirrlogB[fs_mount:115] clodmget -q 'name = mirrlogBlv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(1.975):/oracle/EPP/mirrlogB[fs_mount:115] CuAt_label=/oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs(1.975):/oracle/EPP/mirrlogB[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP/mirrlogB from /etc/filesystems
+epprd_rg:cl_activate_fs(1.975):/oracle/EPP/mirrlogB[fs_mount:119] : should match /oracle/EPP/mirrlogB from CuAt ODM and /oracle/EPP/mirrlogB from the LVCB
+epprd_rg:cl_activate_fs(1.975):/oracle/EPP/mirrlogB[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(1.975):/oracle/EPP/mirrlogB[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(1.975):/oracle/EPP/mirrlogB[fs_mount:123] [[ /oracle/EPP/mirrlogB != /oracle/EPP/mirrlogB ]]
+epprd_rg:cl_activate_fs(1.975):/oracle/EPP/mirrlogB[fs_mount:128] [[ /oracle/EPP/mirrlogB != /oracle/EPP/mirrlogB ]]
+epprd_rg:cl_activate_fs(1.975):/oracle/EPP/mirrlogB[fs_mount:133] (( 0 == 1 ))
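The preceding trace is fs_mount's three-way label consistency check: the mount point from /etc/filesystems (lsfs -c), the label stored in the on-disk logical volume control block (getlvcb -T -A), and the label attribute in the CuAt ODM class must all agree. Both string comparisons above are false, so no mismatch is reported; any recovery was already attempted in clvaryonvg. A minimal standalone ksh sketch of the same check, with an assumed function name and the stock odmget standing in for the cluster-private clodmget:

    check_fs_label() {
        typeset fs=$1 lv lvcb_label cuat_label
        # device for this mount point from /etc/filesystems, e.g. /dev/mirrlogBlv
        lv=$(lsfs -c "$fs" 2>/dev/null | tail -1 | cut -d: -f2)
        lv=${lv##*/}                        # strip the /dev/ prefix
        # label recorded in the on-disk LVCB
        lvcb_label=$(getlvcb -T -A "$lv" 2>/dev/null | grep -w 'label =' | awk '{print $3}')
        # label recorded in the CuAt ODM class
        cuat_label=$(odmget -q "name=$lv and attribute=label" CuAt | awk -F'"' '/value/ {print $2}')
        [[ $lvcb_label == "$fs" && $cuat_label == "$fs" ]]   # true when all three agree
    }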
+epprd_rg:cl_activate_fs(1.975):/oracle/EPP/mirrlogB[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(1.975):/oracle/EPP/mirrlogB[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(1.995):/oracle/EPP/mirrlogB[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.995):/oracle/EPP/mirrlogB[fs_mount:144] [[ -n '' ]]
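Here clwparroot resolves whether the resource group lives in a workload partition: loadWparName finds no WPAR_NAME resource in HACMPresource, so clwparroot prints nothing and exits 0, WPAR_ROOT stays empty, and the filesystem is mounted in the global environment. An assumed condensation of that decision (mount_point is an illustrative name, not taken from the event scripts):

    wpar_root=$(clwparroot epprd_rg)   # empty here: the group has no WPAR_NAME resource
    mount_point=${wpar_root}${fs}      # would be prefixed with the WPAR root path if set
    mount "$mount_point"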
+epprd_rg:cl_activate_fs(1.995):/oracle/EPP/mirrlogB[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP/mirrlogB'
+epprd_rg:cl_activate_fs(1.995):/oracle/EPP/mirrlogB[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.996):/oracle/EPP/mirrlogB[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(2.020):/oracle/EPP/mirrlogB[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(2.023):/oracle/EPP/mirrlogB[amlog_trace:319] DATE=2023-01-28T17:10:41.958692
+epprd_rg:cl_activate_fs(2.023):/oracle/EPP/mirrlogB[amlog_trace:320] echo '|2023-01-28T17:10:41.958692|INFO: Activating Filesystem|/oracle/EPP/mirrlogB'
+epprd_rg:cl_activate_fs(2.023):/oracle/EPP/mirrlogB[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
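amlog_trace writes the availability audit trail shown above: clcycle rotates clavailability.log if needed, cltime supplies a microsecond ISO-8601 timestamp, and the record is appended as a pipe-delimited line. A sketch that reproduces the same record format (the function name is assumed; clcycle and cltime are the PowerHA helpers seen in the trace):

    log_availability() {
        typeset msg=$1 stamp
        clcycle clavailability.log > /dev/null 2>&1   # rotate the log if needed
        stamp=$(cltime)                               # e.g. 2023-01-28T17:10:41.958692
        print -- "|${stamp}|INFO: ${msg}" >> /var/hacmp/availability/clavailability.log
    }
    # usage: log_availability 'Activating Filesystem|/oracle/EPP/mirrlogB'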
+epprd_rg:cl_activate_fs(2.023):/oracle/EPP/mirrlogB[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(2.026):/oracle/EPP/mirrlogB[fs_mount:162] : Try to mount filesystem /oracle/EPP/mirrlogB at Jan 28 17:10:41.000
+epprd_rg:cl_activate_fs(2.026):/oracle/EPP/mirrlogB[fs_mount:163] mount /oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs(2.037):/oracle/EPP/mirrlogB[fs_mount:209] (( 0 == 1 ))
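The mount returned 0, so the recovery branch at fs_mount:209 is skipped. The TOOL=fsck argument implies that on a failed mount the function would run the recovery tool and retry before recording the failure. A hedged sketch of that pattern, with the use of TMP_FILENAME assumed (the real fs_mount branch is more elaborate):

    if ! mount "$fs"; then
        fsck -y "$fs"                            # repair, answering yes to all prompts
        if ! mount "$fs"; then
            print -- "$fs" >> "$TMP_FILENAME"    # assumed: report the failure to the caller
            return 1
        fi
    fi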
+epprd_rg:cl_activate_fs(2.037):/oracle/EPP/mirrlogB[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(2.037):/oracle/EPP/mirrlogB[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(2.037):/oracle/EPP/mirrlogB[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP/mirrlogB'
+epprd_rg:cl_activate_fs(2.037):/oracle/EPP/mirrlogB[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(2.038):/oracle/EPP/mirrlogB[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(2.062):/oracle/EPP/mirrlogB[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(2.065):/oracle/EPP/mirrlogB[amlog_trace:319] DATE=2023-01-28T17:10:42.000582
+epprd_rg:cl_activate_fs(2.065):/oracle/EPP/mirrlogB[amlog_trace:320] echo '|2023-01-28T17:10:42.000582|INFO: Activating Filesystems completed|/oracle/EPP/mirrlogB'
+epprd_rg:cl_activate_fs(2.065):/oracle/EPP/mirrlogB[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(2.065):/oracle/EPP/mirrlogB[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(2.065):/oracle/EPP/mirrlogB[fs_mount:226] : Each of the V, R, M and F fields is padded to a fixed length,
+epprd_rg:cl_activate_fs(2.065):/oracle/EPP/mirrlogB[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(2.065):/oracle/EPP/mirrlogB[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(2.065):/oracle/EPP/mirrlogB[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(2.065):/oracle/EPP/mirrlogB[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(2.065):/oracle/EPP/mirrlogB[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(2.065):/oracle/EPP/mirrlogB[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(2.065):/oracle/EPP/mirrlogB[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(2.065):/oracle/EPP/mirrlogB[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(2.066):/oracle/EPP/mirrlogB[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(2.067):/oracle/EPP/mirrlogB[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(2.069):/oracle/EPP/mirrlogB[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(2.069):/oracle/EPP/mirrlogB[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(2.069):/oracle/EPP/mirrlogB[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(2.069):/oracle/EPP/mirrlogB[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(2.069):/oracle/EPP/mirrlogB[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
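The arithmetic above is the VRMF gate for mountguard: R is zero-padded to two digits and M and F to three, so bos.rte.filesystem level 7.2.5.102 becomes the single integer 702005102, which is tested against 601007000 (AIX 6.1 TL7) and 701001000 (AIX 7.1 TL1). A self-contained ksh93 sketch of the same normalization; note that in ksh the last stage of a pipeline runs in the current shell, so read sets the variables directly:

    typeset -li V VRMF
    typeset -Z2 R                    # zero-fill release to 2 digits
    typeset -Z3 M F                  # zero-fill modification and fix to 3 digits
    lslpp -lcqOr bos.rte.filesystem | cut -f3 -d: | IFS=. read V R M F
    VRMF=${V}${R}${M}${F}            # 7.2.5.102 -> 702005102
    (( V == 7 && VRMF >= 701001000 )) && print 'mountguard supported'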
+epprd_rg:cl_activate_fs(2.069):/oracle/EPP/mirrlogB[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(2.069):/oracle/EPP/mirrlogB[fs_mount:245] : the setting would cause a VG timestamp change, so run it only once
+epprd_rg:cl_activate_fs(2.069):/oracle/EPP/mirrlogB[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(1.950):/oracle/EPP/mirrlogB[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.15 \n\t lvname = mirrlogBlv \n\t label = /oracle/EPP/mirrlogB \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:50 2022\n \t time modified = Sat Dec 17 14:48:37 2022\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(2.069):/oracle/EPP/mirrlogB[fs_mount:249] chfs -a mountguard=yes /oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs(2.070):/oracle/EPP/mirrlogB[fs_mount:249] CLUSTER_OVERRIDE=yes
/oracle/EPP/mirrlogB is now guarded against concurrent mounts.
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/mirrlogB[fs_mount:255] return 0
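With the mount in place, fs_mount enables JFS2 mountguard: chfs -a mountguard=yes marks the filesystem so a second node cannot mount it concurrently, and the "now guarded against concurrent mounts" message confirms it took effect. Because changing the attribute updates the volume group timestamp, the chfs is issued only when the LVCB's fs attribute string does not already contain mountguard=yes, and it runs with CLUSTER_OVERRIDE=yes as in the trace. A condensed sketch of that one-shot enable:

    # $lv and $fs as resolved earlier in fs_mount
    if ! getlvcb -T -A "$lv" 2>&1 | grep -q 'mountguard=yes'; then
        CLUSTER_OVERRIDE=yes chfs -a mountguard=yes "$fs"
    fi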
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP/oraarch
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[activate_fs_process_group:528] [[ sequential == parallel ]]
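activate_fs_process_group now moves on to /oracle/EPP/oraarch. The [[ sequential == parallel ]] test is the dispatch choice: with sequential recovery each fs_mount call runs in the foreground, one filesystem at a time, while parallel would background the calls and collect results through the shared temp file. An assumed shape of that loop (variable names are illustrative):

    for fs in $filesystems; do
        if [[ $method == parallel ]]; then
            fs_mount "$fs" fsck "$tmpfile" &    # background; results gathered via tmpfile
        else
            fs_mount "$fs" fsck "$tmpfile"      # foreground, serial recovery (this trace)
        fi
    done
    wait                                        # reap any background mounts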
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[activate_fs_process_group:540] fs_mount /oracle/EPP/oraarch fsck epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:69] FS=/oracle/EPP/oraarch
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:81] : Here we check that the information in /etc/filesystems for /oracle/EPP/oraarch
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(2.227):/oracle/EPP/oraarch[fs_mount:86] lsfs -c /oracle/EPP/oraarch
+epprd_rg:cl_activate_fs(2.228):/oracle/EPP/oraarch[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(2.233):/oracle/EPP/oraarch[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(2.228):/oracle/EPP/oraarch[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/oraarch:/dev/oraarchlv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_activate_fs(2.233):/oracle/EPP/oraarch[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(2.233):/oracle/EPP/oraarch[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(2.234):/oracle/EPP/oraarch[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(2.228):/oracle/EPP/oraarch[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/oraarch:/dev/oraarchlv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_activate_fs(2.235):/oracle/EPP/oraarch[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(2.236):/oracle/EPP/oraarch[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(2.236):/oracle/EPP/oraarch[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(2.237):/oracle/EPP/oraarch[fs_mount:100] LV_name=oraarchlv
+epprd_rg:cl_activate_fs(2.237):/oracle/EPP/oraarch[fs_mount:101] getlvcb -T -A oraarchlv
+epprd_rg:cl_activate_fs(2.238):/oracle/EPP/oraarch[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(2.255):/oracle/EPP/oraarch[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(2.238):/oracle/EPP/oraarch[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.6 \n\t lvname = oraarchlv \n\t label = /oracle/EPP/oraarch \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:43 2022\n \t time modified = Sat Dec 17 14:48:30 2022\n '
+epprd_rg:cl_activate_fs(2.255):/oracle/EPP/oraarch[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(2.255):/oracle/EPP/oraarch[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(2.256):/oracle/EPP/oraarch[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(2.238):/oracle/EPP/oraarch[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.6 \n\t lvname = oraarchlv \n\t label = /oracle/EPP/oraarch \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:43 2022\n \t time modified = Sat Dec 17 14:48:30 2022\n '
+epprd_rg:cl_activate_fs(2.257):/oracle/EPP/oraarch[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(2.258):/oracle/EPP/oraarch[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(2.259):/oracle/EPP/oraarch[fs_mount:115] clodmget -q 'name = oraarchlv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(2.263):/oracle/EPP/oraarch[fs_mount:115] CuAt_label=/oracle/EPP/oraarch
+epprd_rg:cl_activate_fs(2.263):/oracle/EPP/oraarch[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP/oraarch from /etc/filesystems
+epprd_rg:cl_activate_fs(2.263):/oracle/EPP/oraarch[fs_mount:119] : should match /oracle/EPP/oraarch from CuAt ODM and /oracle/EPP/oraarch from the LVCB
+epprd_rg:cl_activate_fs(2.263):/oracle/EPP/oraarch[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(2.263):/oracle/EPP/oraarch[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(2.263):/oracle/EPP/oraarch[fs_mount:123] [[ /oracle/EPP/oraarch != /oracle/EPP/oraarch ]]
+epprd_rg:cl_activate_fs(2.263):/oracle/EPP/oraarch[fs_mount:128] [[ /oracle/EPP/oraarch != /oracle/EPP/oraarch ]]
+epprd_rg:cl_activate_fs(2.263):/oracle/EPP/oraarch[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(2.263):/oracle/EPP/oraarch[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(2.263):/oracle/EPP/oraarch[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(2.283):/oracle/EPP/oraarch[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(2.283):/oracle/EPP/oraarch[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(2.283):/oracle/EPP/oraarch[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP/oraarch'
+epprd_rg:cl_activate_fs(2.283):/oracle/EPP/oraarch[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(2.284):/oracle/EPP/oraarch[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(2.308):/oracle/EPP/oraarch[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(2.311):/oracle/EPP/oraarch[amlog_trace:319] DATE=2023-01-28T17:10:42.246842
+epprd_rg:cl_activate_fs(2.311):/oracle/EPP/oraarch[amlog_trace:320] echo '|2023-01-28T17:10:42.246842|INFO: Activating Filesystem|/oracle/EPP/oraarch'
+epprd_rg:cl_activate_fs(2.311):/oracle/EPP/oraarch[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(2.311):/oracle/EPP/oraarch[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(2.314):/oracle/EPP/oraarch[fs_mount:162] : Try to mount filesystem /oracle/EPP/oraarch at Jan 28 17:10:42.000
+epprd_rg:cl_activate_fs(2.314):/oracle/EPP/oraarch[fs_mount:163] mount /oracle/EPP/oraarch
+epprd_rg:cl_activate_fs(2.325):/oracle/EPP/oraarch[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(2.325):/oracle/EPP/oraarch[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(2.325):/oracle/EPP/oraarch[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(2.325):/oracle/EPP/oraarch[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP/oraarch'
+epprd_rg:cl_activate_fs(2.325):/oracle/EPP/oraarch[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(2.326):/oracle/EPP/oraarch[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(2.351):/oracle/EPP/oraarch[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(2.353):/oracle/EPP/oraarch[amlog_trace:319] DATE=2023-01-28T17:10:42.289280
+epprd_rg:cl_activate_fs(2.353):/oracle/EPP/oraarch[amlog_trace:320] echo '|2023-01-28T17:10:42.289280|INFO: Activating Filesystems completed|/oracle/EPP/oraarch'
+epprd_rg:cl_activate_fs(2.353):/oracle/EPP/oraarch[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(2.354):/oracle/EPP/oraarch[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(2.354):/oracle/EPP/oraarch[fs_mount:226] : Each of the V, R, M and F fields is padded to a fixed length,
+epprd_rg:cl_activate_fs(2.354):/oracle/EPP/oraarch[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(2.354):/oracle/EPP/oraarch[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(2.354):/oracle/EPP/oraarch[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(2.354):/oracle/EPP/oraarch[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(2.354):/oracle/EPP/oraarch[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(2.354):/oracle/EPP/oraarch[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(2.354):/oracle/EPP/oraarch[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(2.354):/oracle/EPP/oraarch[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(2.355):/oracle/EPP/oraarch[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(2.356):/oracle/EPP/oraarch[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(2.357):/oracle/EPP/oraarch[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(2.357):/oracle/EPP/oraarch[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(2.357):/oracle/EPP/oraarch[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(2.357):/oracle/EPP/oraarch[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(2.358):/oracle/EPP/oraarch[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
+epprd_rg:cl_activate_fs(2.358):/oracle/EPP/oraarch[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(2.358):/oracle/EPP/oraarch[fs_mount:245] : the setting would cause a VG timestamp change, so run it only once
+epprd_rg:cl_activate_fs(2.358):/oracle/EPP/oraarch[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(2.238):/oracle/EPP/oraarch[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.6 \n\t lvname = oraarchlv \n\t label = /oracle/EPP/oraarch \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:43 2022\n \t time modified = Sat Dec 17 14:48:30 2022\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(2.358):/oracle/EPP/oraarch[fs_mount:249] chfs -a mountguard=yes /oracle/EPP/oraarch
+epprd_rg:cl_activate_fs(2.359):/oracle/EPP/oraarch[fs_mount:249] CLUSTER_OVERRIDE=yes
/oracle/EPP/oraarch is now guarded against concurrent mounts.
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/oraarch[fs_mount:255] return 0
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP/origlogA
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[activate_fs_process_group:540] fs_mount /oracle/EPP/origlogA fsck epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:69] FS=/oracle/EPP/origlogA
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:81] : Here we check that the information in /etc/filesystems for /oracle/EPP/origlogA
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(2.506):/oracle/EPP/origlogA[fs_mount:86] lsfs -c /oracle/EPP/origlogA
+epprd_rg:cl_activate_fs(2.507):/oracle/EPP/origlogA[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(2.511):/oracle/EPP/origlogA[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(2.507):/oracle/EPP/origlogA[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/origlogA:/dev/origlogAlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(2.511):/oracle/EPP/origlogA[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(2.512):/oracle/EPP/origlogA[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(2.512):/oracle/EPP/origlogA[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(2.507):/oracle/EPP/origlogA[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/origlogA:/dev/origlogAlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(2.514):/oracle/EPP/origlogA[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(2.516):/oracle/EPP/origlogA[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(2.516):/oracle/EPP/origlogA[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(2.516):/oracle/EPP/origlogA[fs_mount:100] LV_name=origlogAlv
+epprd_rg:cl_activate_fs(2.516):/oracle/EPP/origlogA[fs_mount:101] getlvcb -T -A origlogAlv
+epprd_rg:cl_activate_fs(2.517):/oracle/EPP/origlogA[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(2.533):/oracle/EPP/origlogA[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(2.517):/oracle/EPP/origlogA[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.12 \n\t lvname = origlogAlv \n\t label = /oracle/EPP/origlogA \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:48 2022\n \t time modified = Sat Dec 17 14:48:35 2022\n '
+epprd_rg:cl_activate_fs(2.533):/oracle/EPP/origlogA[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(2.533):/oracle/EPP/origlogA[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(2.534):/oracle/EPP/origlogA[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(2.517):/oracle/EPP/origlogA[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.12 \n\t lvname = origlogAlv \n\t label = /oracle/EPP/origlogA \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:48 2022\n \t time modified = Sat Dec 17 14:48:35 2022\n '
+epprd_rg:cl_activate_fs(2.536):/oracle/EPP/origlogA[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(2.538):/oracle/EPP/origlogA[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(2.538):/oracle/EPP/origlogA[fs_mount:115] clodmget -q 'name = origlogAlv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(2.541):/oracle/EPP/origlogA[fs_mount:115] CuAt_label=/oracle/EPP/origlogA
+epprd_rg:cl_activate_fs(2.541):/oracle/EPP/origlogA[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP/origlogA from /etc/filesystems
+epprd_rg:cl_activate_fs(2.541):/oracle/EPP/origlogA[fs_mount:119] : should match /oracle/EPP/origlogA from CuAt ODM and /oracle/EPP/origlogA from the LVCB
+epprd_rg:cl_activate_fs(2.541):/oracle/EPP/origlogA[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(2.541):/oracle/EPP/origlogA[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(2.541):/oracle/EPP/origlogA[fs_mount:123] [[ /oracle/EPP/origlogA != /oracle/EPP/origlogA ]]
+epprd_rg:cl_activate_fs(2.541):/oracle/EPP/origlogA[fs_mount:128] [[ /oracle/EPP/origlogA != /oracle/EPP/origlogA ]]
+epprd_rg:cl_activate_fs(2.541):/oracle/EPP/origlogA[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(2.541):/oracle/EPP/origlogA[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(2.541):/oracle/EPP/origlogA[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(2.561):/oracle/EPP/origlogA[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(2.561):/oracle/EPP/origlogA[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(2.561):/oracle/EPP/origlogA[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP/origlogA'
+epprd_rg:cl_activate_fs(2.561):/oracle/EPP/origlogA[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(2.562):/oracle/EPP/origlogA[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(2.590):/oracle/EPP/origlogA[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(2.593):/oracle/EPP/origlogA[amlog_trace:319] DATE=2023-01-28T17:10:42.528562
+epprd_rg:cl_activate_fs(2.593):/oracle/EPP/origlogA[amlog_trace:320] echo '|2023-01-28T17:10:42.528562|INFO: Activating Filesystem|/oracle/EPP/origlogA'
+epprd_rg:cl_activate_fs(2.593):/oracle/EPP/origlogA[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(2.593):/oracle/EPP/origlogA[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(2.595):/oracle/EPP/origlogA[fs_mount:162] : Try to mount filesystem /oracle/EPP/origlogA at Jan 28 17:10:42.000
+epprd_rg:cl_activate_fs(2.596):/oracle/EPP/origlogA[fs_mount:163] mount /oracle/EPP/origlogA
+epprd_rg:cl_activate_fs(2.606):/oracle/EPP/origlogA[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(2.606):/oracle/EPP/origlogA[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(2.606):/oracle/EPP/origlogA[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(2.606):/oracle/EPP/origlogA[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP/origlogA'
+epprd_rg:cl_activate_fs(2.606):/oracle/EPP/origlogA[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(2.607):/oracle/EPP/origlogA[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(2.633):/oracle/EPP/origlogA[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(2.635):/oracle/EPP/origlogA[amlog_trace:319] DATE=2023-01-28T17:10:42.571054
+epprd_rg:cl_activate_fs(2.635):/oracle/EPP/origlogA[amlog_trace:320] echo '|2023-01-28T17:10:42.571054|INFO: Activating Filesystems completed|/oracle/EPP/origlogA'
+epprd_rg:cl_activate_fs(2.635):/oracle/EPP/origlogA[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(2.635):/oracle/EPP/origlogA[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(2.635):/oracle/EPP/origlogA[fs_mount:226] : Each of the V, R, M and F fields is padded to a fixed length,
+epprd_rg:cl_activate_fs(2.635):/oracle/EPP/origlogA[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(2.635):/oracle/EPP/origlogA[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(2.635):/oracle/EPP/origlogA[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(2.635):/oracle/EPP/origlogA[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(2.635):/oracle/EPP/origlogA[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(2.636):/oracle/EPP/origlogA[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(2.636):/oracle/EPP/origlogA[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(2.636):/oracle/EPP/origlogA[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(2.636):/oracle/EPP/origlogA[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(2.640):/oracle/EPP/origlogA[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(2.641):/oracle/EPP/origlogA[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(2.641):/oracle/EPP/origlogA[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(2.642):/oracle/EPP/origlogA[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(2.642):/oracle/EPP/origlogA[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(2.642):/oracle/EPP/origlogA[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
+epprd_rg:cl_activate_fs(2.642):/oracle/EPP/origlogA[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(2.642):/oracle/EPP/origlogA[fs_mount:245] : the setting would cause a VG timestamp change, so run it only once
+epprd_rg:cl_activate_fs(2.642):/oracle/EPP/origlogA[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(2.517):/oracle/EPP/origlogA[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.12 \n\t lvname = origlogAlv \n\t label = /oracle/EPP/origlogA \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:48 2022\n \t time modified = Sat Dec 17 14:48:35 2022\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(2.642):/oracle/EPP/origlogA[fs_mount:249] chfs -a mountguard=yes /oracle/EPP/origlogA
+epprd_rg:cl_activate_fs(2.643):/oracle/EPP/origlogA[fs_mount:249] CLUSTER_OVERRIDE=yes
/oracle/EPP/origlogA is now guarded against concurrent mounts.
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogA[fs_mount:255] return 0
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP/origlogB
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[activate_fs_process_group:540] fs_mount /oracle/EPP/origlogB fsck epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:69] FS=/oracle/EPP/origlogB
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(2.789):/oracle/EPP/origlogB[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(2.790):/oracle/EPP/origlogB[fs_mount:81] : Here we check that the information in /etc/filesystems for /oracle/EPP/origlogB
+epprd_rg:cl_activate_fs(2.790):/oracle/EPP/origlogB[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(2.790):/oracle/EPP/origlogB[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(2.790):/oracle/EPP/origlogB[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(2.790):/oracle/EPP/origlogB[fs_mount:86] lsfs -c /oracle/EPP/origlogB
+epprd_rg:cl_activate_fs(2.790):/oracle/EPP/origlogB[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(2.795):/oracle/EPP/origlogB[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(2.791):/oracle/EPP/origlogB[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/origlogB:/dev/origlogBlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(2.795):/oracle/EPP/origlogB[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(2.795):/oracle/EPP/origlogB[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(2.796):/oracle/EPP/origlogB[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(2.791):/oracle/EPP/origlogB[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/origlogB:/dev/origlogBlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(2.798):/oracle/EPP/origlogB[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(2.800):/oracle/EPP/origlogB[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(2.800):/oracle/EPP/origlogB[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(2.800):/oracle/EPP/origlogB[fs_mount:100] LV_name=origlogBlv
+epprd_rg:cl_activate_fs(2.801):/oracle/EPP/origlogB[fs_mount:101] getlvcb -T -A origlogBlv
+epprd_rg:cl_activate_fs(2.801):/oracle/EPP/origlogB[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(2.818):/oracle/EPP/origlogB[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(2.802):/oracle/EPP/origlogB[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.13 \n\t lvname = origlogBlv \n\t label = /oracle/EPP/origlogB \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:49 2022\n \t time modified = Sat Dec 17 14:48:35 2022\n '
+epprd_rg:cl_activate_fs(2.818):/oracle/EPP/origlogB[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(2.818):/oracle/EPP/origlogB[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(2.819):/oracle/EPP/origlogB[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(2.802):/oracle/EPP/origlogB[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.13 \n\t lvname = origlogBlv \n\t label = /oracle/EPP/origlogB \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:49 2022\n \t time modified = Sat Dec 17 14:48:35 2022\n '
+epprd_rg:cl_activate_fs(2.821):/oracle/EPP/origlogB[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(2.823):/oracle/EPP/origlogB[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(2.823):/oracle/EPP/origlogB[fs_mount:115] clodmget -q 'name = origlogBlv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(2.826):/oracle/EPP/origlogB[fs_mount:115] CuAt_label=/oracle/EPP/origlogB
+epprd_rg:cl_activate_fs(2.826):/oracle/EPP/origlogB[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP/origlogB from /etc/filesystems
+epprd_rg:cl_activate_fs(2.826):/oracle/EPP/origlogB[fs_mount:119] : should match /oracle/EPP/origlogB from CuAt ODM and /oracle/EPP/origlogB from the LVCB
+epprd_rg:cl_activate_fs(2.826):/oracle/EPP/origlogB[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(2.826):/oracle/EPP/origlogB[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(2.826):/oracle/EPP/origlogB[fs_mount:123] [[ /oracle/EPP/origlogB != /oracle/EPP/origlogB ]]
+epprd_rg:cl_activate_fs(2.827):/oracle/EPP/origlogB[fs_mount:128] [[ /oracle/EPP/origlogB != /oracle/EPP/origlogB ]]
+epprd_rg:cl_activate_fs(2.827):/oracle/EPP/origlogB[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(2.827):/oracle/EPP/origlogB[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(2.827):/oracle/EPP/origlogB[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(2.846):/oracle/EPP/origlogB[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(2.846):/oracle/EPP/origlogB[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(2.846):/oracle/EPP/origlogB[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP/origlogB'
+epprd_rg:cl_activate_fs(2.846):/oracle/EPP/origlogB[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(2.847):/oracle/EPP/origlogB[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(2.872):/oracle/EPP/origlogB[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(2.875):/oracle/EPP/origlogB[amlog_trace:319] DATE=2023-01-28T17:10:42.810288
+epprd_rg:cl_activate_fs(2.875):/oracle/EPP/origlogB[amlog_trace:320] echo '|2023-01-28T17:10:42.810288|INFO: Activating Filesystem|/oracle/EPP/origlogB'
+epprd_rg:cl_activate_fs(2.875):/oracle/EPP/origlogB[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(2.875):/oracle/EPP/origlogB[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(2.877):/oracle/EPP/origlogB[fs_mount:162] : Try to mount filesystem /oracle/EPP/origlogB at Jan 28 17:10:42.000
+epprd_rg:cl_activate_fs(2.877):/oracle/EPP/origlogB[fs_mount:163] mount /oracle/EPP/origlogB
+epprd_rg:cl_activate_fs(2.888):/oracle/EPP/origlogB[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(2.888):/oracle/EPP/origlogB[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(2.888):/oracle/EPP/origlogB[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(2.888):/oracle/EPP/origlogB[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP/origlogB'
+epprd_rg:cl_activate_fs(2.888):/oracle/EPP/origlogB[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(2.889):/oracle/EPP/origlogB[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(2.914):/oracle/EPP/origlogB[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(2.917):/oracle/EPP/origlogB[amlog_trace:319] DATE=2023-01-28T17:10:42.852850
+epprd_rg:cl_activate_fs(2.917):/oracle/EPP/origlogB[amlog_trace:320] echo '|2023-01-28T17:10:42.852850|INFO: Activating Filesystems completed|/oracle/EPP/origlogB'
+epprd_rg:cl_activate_fs(2.917):/oracle/EPP/origlogB[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(2.917):/oracle/EPP/origlogB[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(2.917):/oracle/EPP/origlogB[fs_mount:226] : Each of the V, R, M and F fields is padded to a fixed length,
+epprd_rg:cl_activate_fs(2.917):/oracle/EPP/origlogB[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(2.917):/oracle/EPP/origlogB[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(2.917):/oracle/EPP/origlogB[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(2.917):/oracle/EPP/origlogB[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(2.917):/oracle/EPP/origlogB[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(2.917):/oracle/EPP/origlogB[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(2.917):/oracle/EPP/origlogB[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(2.917):/oracle/EPP/origlogB[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(2.918):/oracle/EPP/origlogB[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(2.921):/oracle/EPP/origlogB[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(2.923):/oracle/EPP/origlogB[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(2.923):/oracle/EPP/origlogB[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(2.923):/oracle/EPP/origlogB[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(2.923):/oracle/EPP/origlogB[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(2.923):/oracle/EPP/origlogB[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
+epprd_rg:cl_activate_fs(2.923):/oracle/EPP/origlogB[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(2.923):/oracle/EPP/origlogB[fs_mount:245] : the setting would cause a VG timestamp change, so run it only once
+epprd_rg:cl_activate_fs(2.923):/oracle/EPP/origlogB[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(2.802):/oracle/EPP/origlogB[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.13 \n\t lvname = origlogBlv \n\t label = /oracle/EPP/origlogB \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:49 2022\n \t time modified = Sat Dec 17 14:48:35 2022\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(2.923):/oracle/EPP/origlogB[fs_mount:249] chfs -a mountguard=yes /oracle/EPP/origlogB
+epprd_rg:cl_activate_fs(2.924):/oracle/EPP/origlogB[fs_mount:249] CLUSTER_OVERRIDE=yes
/oracle/EPP/origlogB is now guarded against concurrent mounts.
+epprd_rg:cl_activate_fs(3.071):/oracle/EPP/origlogB[fs_mount:255] return 0
+epprd_rg:cl_activate_fs(3.071):/oracle/EPP/sapdata1[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs(3.071):/oracle/EPP/sapdata1[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(3.071):/oracle/EPP/sapdata1[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(3.071):/oracle/EPP/sapdata1[activate_fs_process_group:540] fs_mount /oracle/EPP/sapdata1 fsck epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(3.071):/oracle/EPP/sapdata1[fs_mount:69] FS=/oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs(3.071):/oracle/EPP/sapdata1[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(3.071):/oracle/EPP/sapdata1[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:81] : Here we check that the information in /etc/filesystems for /oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(3.072):/oracle/EPP/sapdata1[fs_mount:86] lsfs -c /oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs(3.073):/oracle/EPP/sapdata1[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(3.077):/oracle/EPP/sapdata1[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(3.073):/oracle/EPP/sapdata1[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata1:/dev/sapdata1lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_activate_fs(3.077):/oracle/EPP/sapdata1[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(3.077):/oracle/EPP/sapdata1[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(3.078):/oracle/EPP/sapdata1[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(3.073):/oracle/EPP/sapdata1[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata1:/dev/sapdata1lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_activate_fs(3.080):/oracle/EPP/sapdata1[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(3.082):/oracle/EPP/sapdata1[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(3.082):/oracle/EPP/sapdata1[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(3.082):/oracle/EPP/sapdata1[fs_mount:100] LV_name=sapdata1lv
+epprd_rg:cl_activate_fs(3.082):/oracle/EPP/sapdata1[fs_mount:101] getlvcb -T -A sapdata1lv
+epprd_rg:cl_activate_fs(3.083):/oracle/EPP/sapdata1[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(3.100):/oracle/EPP/sapdata1[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(3.083):/oracle/EPP/sapdata1[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.7 \n\t lvname = sapdata1lv \n\t label = /oracle/EPP/sapdata1 \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:44 2022\n \t time modified = Sat Dec 17 14:48:31 2022\n '
+epprd_rg:cl_activate_fs(3.100):/oracle/EPP/sapdata1[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(3.100):/oracle/EPP/sapdata1[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(3.101):/oracle/EPP/sapdata1[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(3.083):/oracle/EPP/sapdata1[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.7 \n\t lvname = sapdata1lv \n\t label = /oracle/EPP/sapdata1 \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:44 2022\n \t time modified = Sat Dec 17 14:48:31 2022\n '
+epprd_rg:cl_activate_fs(3.103):/oracle/EPP/sapdata1[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(3.105):/oracle/EPP/sapdata1[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(3.105):/oracle/EPP/sapdata1[fs_mount:115] clodmget -q 'name = sapdata1lv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(3.108):/oracle/EPP/sapdata1[fs_mount:115] CuAt_label=/oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs(3.108):/oracle/EPP/sapdata1[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP/sapdata1 from /etc/filesystems
+epprd_rg:cl_activate_fs(3.108):/oracle/EPP/sapdata1[fs_mount:119] : should match /oracle/EPP/sapdata1 from CuAt ODM and /oracle/EPP/sapdata1 from the LVCB
+epprd_rg:cl_activate_fs(3.108):/oracle/EPP/sapdata1[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(3.108):/oracle/EPP/sapdata1[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(3.108):/oracle/EPP/sapdata1[fs_mount:123] [[ /oracle/EPP/sapdata1 != /oracle/EPP/sapdata1 ]]
+epprd_rg:cl_activate_fs(3.108):/oracle/EPP/sapdata1[fs_mount:128] [[ /oracle/EPP/sapdata1 != /oracle/EPP/sapdata1 ]]
+epprd_rg:cl_activate_fs(3.108):/oracle/EPP/sapdata1[fs_mount:133] (( 0 == 1 ))
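The fs_mount steps above (function lines 86 through 133) are a three-way consistency check: the mount point from /etc/filesystems (lsfs -c), the label in the on-disk LVCB (getlvcb), and the label attribute in the CuAt ODM (clodmget) must all agree before the mount is attempted. A condensed ksh sketch of the same logic, mirroring the traced commands with the error branch reduced to a single message:

    FS=/oracle/EPP/sapdata1                                 # mount point under test
    # /etc/filesystems view: the last line of lsfs -c is the colon-delimited record
    lsfs -c "$FS" 2>&1 | tail -1 | IFS=: read skip LV_dev_name vfs_type rest
    LV_name=${LV_dev_name##*/}                              # /dev/sapdata1lv -> sapdata1lv
    # on-disk LVCB view
    getlvcb -T -A "$LV_name" 2>&1 | grep -w 'label =' | read skip skip LVCB_label
    # ODM view
    CuAt_label=$(clodmget -q "name = $LV_name and attribute = label" -f value -n CuAt)
    if [[ $LVCB_label != "$FS" || $CuAt_label != "$FS" ]]
    then
        print -u2 "label mismatch for $FS: LVCB=$LVCB_label CuAt=$CuAt_label"
    fi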
+epprd_rg:cl_activate_fs(3.108):/oracle/EPP/sapdata1[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(3.108):/oracle/EPP/sapdata1[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
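clwparroot, traced above, decides whether the resource group lives in a WPAR: it needs the bos.wpars fileset installed and a WPAR_NAME value in HACMPresource, and it exits 0 with empty output when no name is found. A minimal sketch of that decision; the WPAR-present branch is not exercised anywhere in this trace, so it is left as a comment:

    rgName=epprd_rg
    wparName=''
    if lslpp -l bos.wpars > /dev/null 2>&1
    then
        wparName=$(clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource)
    fi
    if [[ -z $wparName ]]
    then
        exit 0      # no WPAR: the caller sees WPAR_ROOT='' and mounts in the global environment
    fi
    # with a WPAR configured, clwparroot would print that WPAR's root path here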
+epprd_rg:cl_activate_fs(3.128):/oracle/EPP/sapdata1[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(3.128):/oracle/EPP/sapdata1[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(3.128):/oracle/EPP/sapdata1[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP/sapdata1'
+epprd_rg:cl_activate_fs(3.128):/oracle/EPP/sapdata1[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(3.129):/oracle/EPP/sapdata1[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(3.154):/oracle/EPP/sapdata1[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(3.156):/oracle/EPP/sapdata1[amlog_trace:319] DATE=2023-01-28T17:10:43.092235
+epprd_rg:cl_activate_fs(3.157):/oracle/EPP/sapdata1[amlog_trace:320] echo '|2023-01-28T17:10:43.092235|INFO: Activating Filesystem|/oracle/EPP/sapdata1'
+epprd_rg:cl_activate_fs(3.157):/oracle/EPP/sapdata1[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
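Every amlog_trace call in this trace has the same shape: rotate clavailability.log with clcycle, timestamp with cltime, append one pipe-delimited INFO record. A sketch reconstructed from the traced lines (the real helper may accept more arguments and severities than shown here):

    amlog_trace()
    {
        typeset msg=$2                     # $1 is passed as '' throughout this trace
        clcycle clavailability.log > /dev/null 2>&1
        typeset DATE
        DATE=$(cltime)                     # e.g. 2023-01-28T17:10:43.092235
        echo "|$DATE|INFO: $msg" >> /var/hacmp/availability/clavailability.log
    }

    amlog_trace '' 'Activating Filesystem|/oracle/EPP/sapdata1'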
+epprd_rg:cl_activate_fs(3.157):/oracle/EPP/sapdata1[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(3.159):/oracle/EPP/sapdata1[fs_mount:162] : Try to mount filesystem /oracle/EPP/sapdata1 at Jan 28 17:10:43.000
+epprd_rg:cl_activate_fs(3.159):/oracle/EPP/sapdata1[fs_mount:163] mount /oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs(3.170):/oracle/EPP/sapdata1[fs_mount:209] (( 0 == 1 ))
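The mount itself is a plain mount of the /etc/filesystems stanza; the false test at fs_mount:209 is the untaken failure branch. Its body is not visible in this trace, so the following is an assumption: given the TOOL=fsck argument, a plausible recovery is repair-then-retry.

    # hypothetical recovery sketch; the failure branch is never exercised here
    if ! mount "$FS"
    then
        fsck -y "$FS"               # assumed repair step, per the TOOL=fsck argument
        mount "$FS" || return 1     # give up after one retry
    fi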
+epprd_rg:cl_activate_fs(3.170):/oracle/EPP/sapdata1[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(3.170):/oracle/EPP/sapdata1[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(3.171):/oracle/EPP/sapdata1[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP/sapdata1'
+epprd_rg:cl_activate_fs(3.171):/oracle/EPP/sapdata1[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(3.171):/oracle/EPP/sapdata1[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(3.197):/oracle/EPP/sapdata1[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(3.200):/oracle/EPP/sapdata1[amlog_trace:319] DATE=2023-01-28T17:10:43.135400
+epprd_rg:cl_activate_fs(3.200):/oracle/EPP/sapdata1[amlog_trace:320] echo '|2023-01-28T17:10:43.135400|INFO: Activating Filesystems completed|/oracle/EPP/sapdata1'
+epprd_rg:cl_activate_fs(3.200):/oracle/EPP/sapdata1[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(3.200):/oracle/EPP/sapdata1[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(3.200):/oracle/EPP/sapdata1[fs_mount:226] : Each of the V, R, M and F fields is padded to fixed length,
+epprd_rg:cl_activate_fs(3.200):/oracle/EPP/sapdata1[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(3.200):/oracle/EPP/sapdata1[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(3.200):/oracle/EPP/sapdata1[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(3.200):/oracle/EPP/sapdata1[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(3.200):/oracle/EPP/sapdata1[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(3.200):/oracle/EPP/sapdata1[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(3.200):/oracle/EPP/sapdata1[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(3.200):/oracle/EPP/sapdata1[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(3.201):/oracle/EPP/sapdata1[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(3.204):/oracle/EPP/sapdata1[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(3.206):/oracle/EPP/sapdata1[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(3.206):/oracle/EPP/sapdata1[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(3.206):/oracle/EPP/sapdata1[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(3.206):/oracle/EPP/sapdata1[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(3.206):/oracle/EPP/sapdata1[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
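The typeset calls above exist so a four-part fileset level can be compared as a single integer: R, M and F are zero-padded to 2, 3 and 3 digits, so bos.rte.filesystem 7.2.5.102 packs into 702005102, and one arithmetic test decides whether this AIX level supports mountguard. The same packing, isolated:

    typeset -li V R M F                 # integer views of the level fields
    typeset -Z2 R                       # zero-pad release to 2 digits
    typeset -Z3 M F                     # zero-pad modification and fix to 3 digits
    typeset -li VRMF=0
    lslpp -lcqOr bos.rte.filesystem | cut -f3 -d: | IFS=. read V R M F
    VRMF=$V$R$M$F                       # 7.2.5.102 -> 702005102
    (( V == 7 && VRMF >= 701001000 )) && print 'mountguard supported'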
+epprd_rg:cl_activate_fs(3.206):/oracle/EPP/sapdata1[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(3.206):/oracle/EPP/sapdata1[fs_mount:245] : the setting would cause a VG timestamp change, so run it only once
+epprd_rg:cl_activate_fs(3.206):/oracle/EPP/sapdata1[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(3.083):/oracle/EPP/sapdata1[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.7 \n\t lvname = sapdata1lv \n\t label = /oracle/EPP/sapdata1 \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:44 2022\n \t time modified = Sat Dec 17 14:48:31 2022\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(3.206):/oracle/EPP/sapdata1[fs_mount:249] chfs -a mountguard=yes /oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs(3.207):/oracle/EPP/sapdata1[fs_mount:249] CLUSTER_OVERRIDE=yes
/oracle/EPP/sapdata1 is now guarded against concurrent mounts.
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata1[fs_mount:255] return 0
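Mountguard is the last step of each successful mount: it makes JFS2 itself reject a second concurrent mount of the filesystem from another node. The trace only issues chfs when the LVCB does not already record mountguard=yes, since chfs rewrites the VG timestamp; CLUSTER_OVERRIDE=yes is set for the call, presumably so the cluster's command wrappers allow the local change. Condensed:

    # set mountguard once per filesystem; skip when the LVCB already records it,
    # so the VG timestamp is not bumped on every acquisition
    if [[ $LVCB_info != *mountguard=yes* ]]
    then
        CLUSTER_OVERRIDE=yes chfs -a mountguard=yes "$FS"
    fi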
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[activate_fs_process_group:540] fs_mount /oracle/EPP/sapdata2 fsck epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:69] FS=/oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:81] : Here check to see if the information in /etc/filesystems for /oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(3.354):/oracle/EPP/sapdata2[fs_mount:86] lsfs -c /oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs(3.355):/oracle/EPP/sapdata2[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(3.360):/oracle/EPP/sapdata2[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(3.355):/oracle/EPP/sapdata2[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata2:/dev/sapdata2lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_activate_fs(3.360):/oracle/EPP/sapdata2[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(3.360):/oracle/EPP/sapdata2[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(3.361):/oracle/EPP/sapdata2[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(3.355):/oracle/EPP/sapdata2[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata2:/dev/sapdata2lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_activate_fs(3.363):/oracle/EPP/sapdata2[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(3.365):/oracle/EPP/sapdata2[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(3.365):/oracle/EPP/sapdata2[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(3.365):/oracle/EPP/sapdata2[fs_mount:100] LV_name=sapdata2lv
+epprd_rg:cl_activate_fs(3.365):/oracle/EPP/sapdata2[fs_mount:101] getlvcb -T -A sapdata2lv
+epprd_rg:cl_activate_fs(3.366):/oracle/EPP/sapdata2[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(3.382):/oracle/EPP/sapdata2[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(3.366):/oracle/EPP/sapdata2[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.8 \n\t lvname = sapdata2lv \n\t label = /oracle/EPP/sapdata2 \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:45 2022\n \t time modified = Sat Dec 17 14:48:32 2022\n '
+epprd_rg:cl_activate_fs(3.382):/oracle/EPP/sapdata2[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(3.382):/oracle/EPP/sapdata2[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(3.383):/oracle/EPP/sapdata2[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(3.366):/oracle/EPP/sapdata2[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.8 \n\t lvname = sapdata2lv \n\t label = /oracle/EPP/sapdata2 \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:45 2022\n \t time modified = Sat Dec 17 14:48:32 2022\n '
+epprd_rg:cl_activate_fs(3.385):/oracle/EPP/sapdata2[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(3.387):/oracle/EPP/sapdata2[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(3.388):/oracle/EPP/sapdata2[fs_mount:115] clodmget -q 'name = sapdata2lv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(3.391):/oracle/EPP/sapdata2[fs_mount:115] CuAt_label=/oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs(3.391):/oracle/EPP/sapdata2[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP/sapdata2 from /etc/filesystems
+epprd_rg:cl_activate_fs(3.391):/oracle/EPP/sapdata2[fs_mount:119] : should match /oracle/EPP/sapdata2 from CuAt ODM and /oracle/EPP/sapdata2 from the LVCB
+epprd_rg:cl_activate_fs(3.391):/oracle/EPP/sapdata2[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(3.391):/oracle/EPP/sapdata2[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(3.391):/oracle/EPP/sapdata2[fs_mount:123] [[ /oracle/EPP/sapdata2 != /oracle/EPP/sapdata2 ]]
+epprd_rg:cl_activate_fs(3.391):/oracle/EPP/sapdata2[fs_mount:128] [[ /oracle/EPP/sapdata2 != /oracle/EPP/sapdata2 ]]
+epprd_rg:cl_activate_fs(3.391):/oracle/EPP/sapdata2[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(3.391):/oracle/EPP/sapdata2[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(3.391):/oracle/EPP/sapdata2[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(3.411):/oracle/EPP/sapdata2[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(3.411):/oracle/EPP/sapdata2[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(3.411):/oracle/EPP/sapdata2[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP/sapdata2'
+epprd_rg:cl_activate_fs(3.411):/oracle/EPP/sapdata2[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(3.411):/oracle/EPP/sapdata2[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(3.437):/oracle/EPP/sapdata2[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(3.440):/oracle/EPP/sapdata2[amlog_trace:319] DATE=2023-01-28T17:10:43.375387
+epprd_rg:cl_activate_fs(3.440):/oracle/EPP/sapdata2[amlog_trace:320] echo '|2023-01-28T17:10:43.375387|INFO: Activating Filesystem|/oracle/EPP/sapdata2'
+epprd_rg:cl_activate_fs(3.440):/oracle/EPP/sapdata2[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(3.440):/oracle/EPP/sapdata2[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(3.442):/oracle/EPP/sapdata2[fs_mount:162] : Try to mount filesystem /oracle/EPP/sapdata2 at Jan 28 17:10:43.000
+epprd_rg:cl_activate_fs(3.442):/oracle/EPP/sapdata2[fs_mount:163] mount /oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs(3.453):/oracle/EPP/sapdata2[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(3.453):/oracle/EPP/sapdata2[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(3.453):/oracle/EPP/sapdata2[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(3.453):/oracle/EPP/sapdata2[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP/sapdata2'
+epprd_rg:cl_activate_fs(3.453):/oracle/EPP/sapdata2[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(3.454):/oracle/EPP/sapdata2[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(3.480):/oracle/EPP/sapdata2[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(3.483):/oracle/EPP/sapdata2[amlog_trace:319] DATE=2023-01-28T17:10:43.418289
+epprd_rg:cl_activate_fs(3.483):/oracle/EPP/sapdata2[amlog_trace:320] echo '|2023-01-28T17:10:43.418289|INFO: Activating Filesystems completed|/oracle/EPP/sapdata2'
+epprd_rg:cl_activate_fs(3.483):/oracle/EPP/sapdata2[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(3.483):/oracle/EPP/sapdata2[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(3.483):/oracle/EPP/sapdata2[fs_mount:226] : Each of the V, R, M and F fields is padded to fixed length,
+epprd_rg:cl_activate_fs(3.483):/oracle/EPP/sapdata2[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(3.483):/oracle/EPP/sapdata2[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(3.483):/oracle/EPP/sapdata2[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(3.483):/oracle/EPP/sapdata2[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(3.483):/oracle/EPP/sapdata2[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(3.483):/oracle/EPP/sapdata2[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(3.483):/oracle/EPP/sapdata2[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(3.483):/oracle/EPP/sapdata2[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(3.484):/oracle/EPP/sapdata2[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(3.487):/oracle/EPP/sapdata2[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(3.489):/oracle/EPP/sapdata2[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(3.489):/oracle/EPP/sapdata2[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(3.489):/oracle/EPP/sapdata2[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(3.489):/oracle/EPP/sapdata2[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(3.489):/oracle/EPP/sapdata2[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
+epprd_rg:cl_activate_fs(3.489):/oracle/EPP/sapdata2[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(3.489):/oracle/EPP/sapdata2[fs_mount:245] : the setting would cause a VG timestamp change, so run it only once
+epprd_rg:cl_activate_fs(3.489):/oracle/EPP/sapdata2[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(3.366):/oracle/EPP/sapdata2[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.8 \n\t lvname = sapdata2lv \n\t label = /oracle/EPP/sapdata2 \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:45 2022\n \t time modified = Sat Dec 17 14:48:32 2022\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(3.489):/oracle/EPP/sapdata2[fs_mount:249] chfs -a mountguard=yes /oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs(3.490):/oracle/EPP/sapdata2[fs_mount:249] CLUSTER_OVERRIDE=yes
/oracle/EPP/sapdata2 is now guarded against concurrent mounts.
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata2[fs_mount:255] return 0
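Between filesystems, control returns to activate_fs_process_group (lines 527 through 540): it records the current mount point in PS4_LOOP, which is why every traced line carries the filesystem name, and because the recovery method here is sequential rather than parallel it calls fs_mount in the foreground so each mount finishes before the next begins. A sketch of that dispatch; FILESYSTEMS and RECOVERY_METHOD are illustrative names, not necessarily the script's own:

    for fs in $FILESYSTEMS          # e.g. /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 ...
    do
        PS4_LOOP=$fs                # tags each traced line with the current mount point
        if [[ $RECOVERY_METHOD == parallel ]]
        then
            fs_mount $fs fsck $TMP_FILENAME &   # background: recover concurrently
        else
            fs_mount $fs fsck $TMP_FILENAME     # foreground: serial recovery, as here
        fi
    done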
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[activate_fs_process_group:540] fs_mount /oracle/EPP/sapdata3 fsck epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:69] FS=/oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:81] : Here check to see if the information in /etc/filesystems for /oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(3.637):/oracle/EPP/sapdata3[fs_mount:86] lsfs -c /oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs(3.638):/oracle/EPP/sapdata3[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(3.642):/oracle/EPP/sapdata3[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(3.638):/oracle/EPP/sapdata3[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata3:/dev/sapdata3lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_activate_fs(3.643):/oracle/EPP/sapdata3[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(3.643):/oracle/EPP/sapdata3[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(3.643):/oracle/EPP/sapdata3[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(3.638):/oracle/EPP/sapdata3[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata3:/dev/sapdata3lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_activate_fs(3.645):/oracle/EPP/sapdata3[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(3.647):/oracle/EPP/sapdata3[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(3.647):/oracle/EPP/sapdata3[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(3.647):/oracle/EPP/sapdata3[fs_mount:100] LV_name=sapdata3lv
+epprd_rg:cl_activate_fs(3.647):/oracle/EPP/sapdata3[fs_mount:101] getlvcb -T -A sapdata3lv
+epprd_rg:cl_activate_fs(3.648):/oracle/EPP/sapdata3[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(3.665):/oracle/EPP/sapdata3[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(3.648):/oracle/EPP/sapdata3[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.9 \n\t lvname = sapdata3lv \n\t label = /oracle/EPP/sapdata3 \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:46 2022\n \t time modified = Sat Dec 17 14:48:33 2022\n '
+epprd_rg:cl_activate_fs(3.665):/oracle/EPP/sapdata3[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(3.665):/oracle/EPP/sapdata3[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(3.666):/oracle/EPP/sapdata3[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(3.648):/oracle/EPP/sapdata3[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.9 \n\t lvname = sapdata3lv \n\t label = /oracle/EPP/sapdata3 \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:46 2022\n \t time modified = Sat Dec 17 14:48:33 2022\n '
+epprd_rg:cl_activate_fs(3.668):/oracle/EPP/sapdata3[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(3.670):/oracle/EPP/sapdata3[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(3.670):/oracle/EPP/sapdata3[fs_mount:115] clodmget -q 'name = sapdata3lv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(3.673):/oracle/EPP/sapdata3[fs_mount:115] CuAt_label=/oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs(3.673):/oracle/EPP/sapdata3[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP/sapdata3 from /etc/filesystems
+epprd_rg:cl_activate_fs(3.673):/oracle/EPP/sapdata3[fs_mount:119] : should match /oracle/EPP/sapdata3 from CuAt ODM and /oracle/EPP/sapdata3 from the LVCB
+epprd_rg:cl_activate_fs(3.673):/oracle/EPP/sapdata3[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(3.673):/oracle/EPP/sapdata3[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(3.674):/oracle/EPP/sapdata3[fs_mount:123] [[ /oracle/EPP/sapdata3 != /oracle/EPP/sapdata3 ]]
+epprd_rg:cl_activate_fs(3.674):/oracle/EPP/sapdata3[fs_mount:128] [[ /oracle/EPP/sapdata3 != /oracle/EPP/sapdata3 ]]
+epprd_rg:cl_activate_fs(3.674):/oracle/EPP/sapdata3[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(3.674):/oracle/EPP/sapdata3[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(3.674):/oracle/EPP/sapdata3[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(3.693):/oracle/EPP/sapdata3[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(3.693):/oracle/EPP/sapdata3[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(3.693):/oracle/EPP/sapdata3[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP/sapdata3'
+epprd_rg:cl_activate_fs(3.693):/oracle/EPP/sapdata3[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(3.694):/oracle/EPP/sapdata3[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(3.719):/oracle/EPP/sapdata3[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(3.722):/oracle/EPP/sapdata3[amlog_trace:319] DATE=2023-01-28T17:10:43.657441
+epprd_rg:cl_activate_fs(3.722):/oracle/EPP/sapdata3[amlog_trace:320] echo '|2023-01-28T17:10:43.657441|INFO: Activating Filesystem|/oracle/EPP/sapdata3'
+epprd_rg:cl_activate_fs(3.722):/oracle/EPP/sapdata3[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(3.722):/oracle/EPP/sapdata3[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(3.724):/oracle/EPP/sapdata3[fs_mount:162] : Try to mount filesystem /oracle/EPP/sapdata3 at Jan 28 17:10:43.000
+epprd_rg:cl_activate_fs(3.724):/oracle/EPP/sapdata3[fs_mount:163] mount /oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs(3.735):/oracle/EPP/sapdata3[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(3.735):/oracle/EPP/sapdata3[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(3.735):/oracle/EPP/sapdata3[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(3.735):/oracle/EPP/sapdata3[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP/sapdata3'
+epprd_rg:cl_activate_fs(3.735):/oracle/EPP/sapdata3[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(3.736):/oracle/EPP/sapdata3[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(3.761):/oracle/EPP/sapdata3[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(3.764):/oracle/EPP/sapdata3[amlog_trace:319] DATE=2023-01-28T17:10:43.699899
+epprd_rg:cl_activate_fs(3.764):/oracle/EPP/sapdata3[amlog_trace:320] echo '|2023-01-28T17:10:43.699899|INFO: Activating Filesystems completed|/oracle/EPP/sapdata3'
+epprd_rg:cl_activate_fs(3.764):/oracle/EPP/sapdata3[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(3.764):/oracle/EPP/sapdata3[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(3.764):/oracle/EPP/sapdata3[fs_mount:226] : Each of the V, R, M and F fields is padded to fixed length,
+epprd_rg:cl_activate_fs(3.764):/oracle/EPP/sapdata3[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(3.764):/oracle/EPP/sapdata3[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(3.764):/oracle/EPP/sapdata3[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(3.764):/oracle/EPP/sapdata3[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(3.764):/oracle/EPP/sapdata3[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(3.764):/oracle/EPP/sapdata3[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(3.764):/oracle/EPP/sapdata3[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(3.764):/oracle/EPP/sapdata3[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(3.765):/oracle/EPP/sapdata3[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(3.768):/oracle/EPP/sapdata3[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(3.770):/oracle/EPP/sapdata3[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(3.770):/oracle/EPP/sapdata3[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(3.770):/oracle/EPP/sapdata3[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(3.770):/oracle/EPP/sapdata3[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(3.770):/oracle/EPP/sapdata3[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
+epprd_rg:cl_activate_fs(3.770):/oracle/EPP/sapdata3[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(3.770):/oracle/EPP/sapdata3[fs_mount:245] : the setting would cause a VG timestamp change, so run it only once
+epprd_rg:cl_activate_fs(3.770):/oracle/EPP/sapdata3[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(3.648):/oracle/EPP/sapdata3[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.9 \n\t lvname = sapdata3lv \n\t label = /oracle/EPP/sapdata3 \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:46 2022\n \t time modified = Sat Dec 17 14:48:33 2022\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(3.770):/oracle/EPP/sapdata3[fs_mount:249] chfs -a mountguard=yes /oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs(3.771):/oracle/EPP/sapdata3[fs_mount:249] CLUSTER_OVERRIDE=yes
/oracle/EPP/sapdata3 is now guarded against concurrent mounts.
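The "is now guarded against concurrent mounts" line is chfs's own confirmation. To verify the attribute later, lsfs -q queries the superblock and, on AIX levels that support mountguard, reports its state, though the exact output format varies by release:

    lsfs -q /oracle/EPP/sapdata3 | grep -i mountguard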
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata3[fs_mount:255] return 0
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[activate_fs_process_group:540] fs_mount /oracle/EPP/sapdata4 fsck epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:69] FS=/oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:81] : Here check to see if the information in /etc/filesystems for /oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(3.918):/oracle/EPP/sapdata4[fs_mount:86] lsfs -c /oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs(3.919):/oracle/EPP/sapdata4[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(3.924):/oracle/EPP/sapdata4[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(3.919):/oracle/EPP/sapdata4[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata4:/dev/sapdata4lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_activate_fs(3.924):/oracle/EPP/sapdata4[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(3.924):/oracle/EPP/sapdata4[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(3.925):/oracle/EPP/sapdata4[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(3.919):/oracle/EPP/sapdata4[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata4:/dev/sapdata4lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_activate_fs(3.927):/oracle/EPP/sapdata4[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(3.929):/oracle/EPP/sapdata4[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(3.929):/oracle/EPP/sapdata4[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(3.929):/oracle/EPP/sapdata4[fs_mount:100] LV_name=sapdata4lv
+epprd_rg:cl_activate_fs(3.929):/oracle/EPP/sapdata4[fs_mount:101] getlvcb -T -A sapdata4lv
+epprd_rg:cl_activate_fs(3.930):/oracle/EPP/sapdata4[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(3.947):/oracle/EPP/sapdata4[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(3.930):/oracle/EPP/sapdata4[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.10 \n\t lvname = sapdata4lv \n\t label = /oracle/EPP/sapdata4 \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:46 2022\n \t time modified = Sat Dec 17 14:48:34 2022\n '
+epprd_rg:cl_activate_fs(3.947):/oracle/EPP/sapdata4[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(3.947):/oracle/EPP/sapdata4[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(3.948):/oracle/EPP/sapdata4[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(3.930):/oracle/EPP/sapdata4[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.10 \n\t lvname = sapdata4lv \n\t label = /oracle/EPP/sapdata4 \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:46 2022\n \t time modified = Sat Dec 17 14:48:34 2022\n '
+epprd_rg:cl_activate_fs(3.950):/oracle/EPP/sapdata4[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(3.952):/oracle/EPP/sapdata4[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(3.952):/oracle/EPP/sapdata4[fs_mount:115] clodmget -q 'name = sapdata4lv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(3.955):/oracle/EPP/sapdata4[fs_mount:115] CuAt_label=/oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs(3.955):/oracle/EPP/sapdata4[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP/sapdata4 from /etc/filesystems
+epprd_rg:cl_activate_fs(3.955):/oracle/EPP/sapdata4[fs_mount:119] : should match /oracle/EPP/sapdata4 from CuAt ODM and /oracle/EPP/sapdata4 from the LVCB
+epprd_rg:cl_activate_fs(3.955):/oracle/EPP/sapdata4[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(3.955):/oracle/EPP/sapdata4[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(3.955):/oracle/EPP/sapdata4[fs_mount:123] [[ /oracle/EPP/sapdata4 != /oracle/EPP/sapdata4 ]]
+epprd_rg:cl_activate_fs(3.955):/oracle/EPP/sapdata4[fs_mount:128] [[ /oracle/EPP/sapdata4 != /oracle/EPP/sapdata4 ]]
+epprd_rg:cl_activate_fs(3.955):/oracle/EPP/sapdata4[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(3.955):/oracle/EPP/sapdata4[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(3.955):/oracle/EPP/sapdata4[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(3.975):/oracle/EPP/sapdata4[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(3.975):/oracle/EPP/sapdata4[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(3.975):/oracle/EPP/sapdata4[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP/sapdata4'
+epprd_rg:cl_activate_fs(3.975):/oracle/EPP/sapdata4[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(3.976):/oracle/EPP/sapdata4[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(4.001):/oracle/EPP/sapdata4[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(4.004):/oracle/EPP/sapdata4[amlog_trace:319] DATE=2023-01-28T17:10:43.939669
+epprd_rg:cl_activate_fs(4.004):/oracle/EPP/sapdata4[amlog_trace:320] echo '|2023-01-28T17:10:43.939669|INFO: Activating Filesystem|/oracle/EPP/sapdata4'
+epprd_rg:cl_activate_fs(4.004):/oracle/EPP/sapdata4[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(4.004):/oracle/EPP/sapdata4[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(4.007):/oracle/EPP/sapdata4[fs_mount:162] : Try to mount filesystem /oracle/EPP/sapdata4 at Jan 28 17:10:43.000
+epprd_rg:cl_activate_fs(4.007):/oracle/EPP/sapdata4[fs_mount:163] mount /oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs(4.018):/oracle/EPP/sapdata4[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(4.018):/oracle/EPP/sapdata4[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(4.018):/oracle/EPP/sapdata4[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(4.018):/oracle/EPP/sapdata4[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP/sapdata4'
+epprd_rg:cl_activate_fs(4.018):/oracle/EPP/sapdata4[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(4.018):/oracle/EPP/sapdata4[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(4.044):/oracle/EPP/sapdata4[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(4.047):/oracle/EPP/sapdata4[amlog_trace:319] DATE=2023-01-28T17:10:43.982442
+epprd_rg:cl_activate_fs(4.047):/oracle/EPP/sapdata4[amlog_trace:320] echo '|2023-01-28T17:10:43.982442|INFO: Activating Filesystems completed|/oracle/EPP/sapdata4'
+epprd_rg:cl_activate_fs(4.047):/oracle/EPP/sapdata4[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(4.047):/oracle/EPP/sapdata4[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(4.047):/oracle/EPP/sapdata4[fs_mount:226] : Each of the V, R, M and F fields is padded to fixed length,
+epprd_rg:cl_activate_fs(4.047):/oracle/EPP/sapdata4[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(4.047):/oracle/EPP/sapdata4[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(4.047):/oracle/EPP/sapdata4[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(4.047):/oracle/EPP/sapdata4[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(4.047):/oracle/EPP/sapdata4[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(4.047):/oracle/EPP/sapdata4[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(4.047):/oracle/EPP/sapdata4[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(4.047):/oracle/EPP/sapdata4[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(4.048):/oracle/EPP/sapdata4[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(4.051):/oracle/EPP/sapdata4[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(4.053):/oracle/EPP/sapdata4[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(4.053):/oracle/EPP/sapdata4[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(4.053):/oracle/EPP/sapdata4[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(4.053):/oracle/EPP/sapdata4[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(4.053):/oracle/EPP/sapdata4[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
+epprd_rg:cl_activate_fs(4.053):/oracle/EPP/sapdata4[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(4.053):/oracle/EPP/sapdata4[fs_mount:245] : the setting would cause a VG timestamp change, so run it only once
+epprd_rg:cl_activate_fs(4.053):/oracle/EPP/sapdata4[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(3.930):/oracle/EPP/sapdata4[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.10 \n\t lvname = sapdata4lv \n\t label = /oracle/EPP/sapdata4 \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:46 2022\n \t time modified = Sat Dec 17 14:48:34 2022\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(4.053):/oracle/EPP/sapdata4[fs_mount:249] chfs -a mountguard=yes /oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs(4.054):/oracle/EPP/sapdata4[fs_mount:249] CLUSTER_OVERRIDE=yes
/oracle/EPP/sapdata4 is now guarded against concurrent mounts.
+epprd_rg:cl_activate_fs(4.200):/oracle/EPP/sapdata4[fs_mount:255] return 0
+epprd_rg:cl_activate_fs(4.200):/sapmnt[activate_fs_process_group:527] PS4_LOOP=/sapmnt
+epprd_rg:cl_activate_fs(4.200):/sapmnt[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(4.200):/sapmnt[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(4.200):/sapmnt[activate_fs_process_group:540] fs_mount /sapmnt fsck epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:69] FS=/sapmnt
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:81] : Here check to see if the information in /etc/filesystems for /sapmnt
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(4.200):/sapmnt[fs_mount:86] lsfs -c /sapmnt
+epprd_rg:cl_activate_fs(4.201):/sapmnt[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(4.206):/sapmnt[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(4.201):/sapmnt[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/sapmnt:/dev/sapmntlv:jfs2:::20971520:rw:no:no'
+epprd_rg:cl_activate_fs(4.206):/sapmnt[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(4.206):/sapmnt[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(4.207):/sapmnt[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(4.201):/sapmnt[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/sapmnt:/dev/sapmntlv:jfs2:::20971520:rw:no:no'
+epprd_rg:cl_activate_fs(4.209):/sapmnt[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(4.211):/sapmnt[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(4.211):/sapmnt[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(4.211):/sapmnt[fs_mount:100] LV_name=sapmntlv
+epprd_rg:cl_activate_fs(4.211):/sapmnt[fs_mount:101] getlvcb -T -A sapmntlv
+epprd_rg:cl_activate_fs(4.212):/sapmnt[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(4.228):/sapmnt[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(4.212):/sapmnt[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.3 \n\t lvname = sapmntlv \n\t label = /sapmnt \n\t machine id = 44AF14B00 \n\t number lps = 20 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:41 2022\n \t time modified = Sat Dec 17 14:48:08 2022\n '
+epprd_rg:cl_activate_fs(4.228):/sapmnt[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(4.228):/sapmnt[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(4.229):/sapmnt[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(4.212):/sapmnt[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.3 \n\t lvname = sapmntlv \n\t label = /sapmnt \n\t machine id = 44AF14B00 \n\t number lps = 20 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:41 2022\n \t time modified = Sat Dec 17 14:48:08 2022\n '
+epprd_rg:cl_activate_fs(4.231):/sapmnt[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(4.233):/sapmnt[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(4.233):/sapmnt[fs_mount:115] clodmget -q 'name = sapmntlv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(4.236):/sapmnt[fs_mount:115] CuAt_label=/sapmnt
+epprd_rg:cl_activate_fs(4.236):/sapmnt[fs_mount:118] : At this point, if things are working correctly, /sapmnt from /etc/filesystems
+epprd_rg:cl_activate_fs(4.236):/sapmnt[fs_mount:119] : should match /sapmnt from CuAt ODM and /sapmnt from the LVCB
+epprd_rg:cl_activate_fs(4.236):/sapmnt[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(4.236):/sapmnt[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(4.236):/sapmnt[fs_mount:123] [[ /sapmnt != /sapmnt ]]
+epprd_rg:cl_activate_fs(4.236):/sapmnt[fs_mount:128] [[ /sapmnt != /sapmnt ]]
+epprd_rg:cl_activate_fs(4.236):/sapmnt[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(4.236):/sapmnt[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(4.236):/sapmnt[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(4.256):/sapmnt[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(4.256):/sapmnt[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(4.256):/sapmnt[fs_mount:160] amlog_trace '' 'Activating Filesystem|/sapmnt'
+epprd_rg:cl_activate_fs(4.256):/sapmnt[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(4.257):/sapmnt[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(4.282):/sapmnt[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(4.284):/sapmnt[amlog_trace:319] DATE=2023-01-28T17:10:44.220211
+epprd_rg:cl_activate_fs(4.284):/sapmnt[amlog_trace:320] echo '|2023-01-28T17:10:44.220211|INFO: Activating Filesystem|/sapmnt'
+epprd_rg:cl_activate_fs(4.285):/sapmnt[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(4.285):/sapmnt[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(4.287):/sapmnt[fs_mount:162] : Try to mount filesystem /sapmnt at Jan 28 17:10:44.000
+epprd_rg:cl_activate_fs(4.287):/sapmnt[fs_mount:163] mount /sapmnt
+epprd_rg:cl_activate_fs(4.298):/sapmnt[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(4.298):/sapmnt[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(4.298):/sapmnt[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(4.298):/sapmnt[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/sapmnt'
+epprd_rg:cl_activate_fs(4.298):/sapmnt[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(4.299):/sapmnt[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(4.324):/sapmnt[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(4.327):/sapmnt[amlog_trace:319] DATE=2023-01-28T17:10:44.262684
+epprd_rg:cl_activate_fs(4.327):/sapmnt[amlog_trace:320] echo '|2023-01-28T17:10:44.262684|INFO: Activating Filesystems completed|/sapmnt'
+epprd_rg:cl_activate_fs(4.327):/sapmnt[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(4.327):/sapmnt[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(4.327):/sapmnt[fs_mount:226] : Each of the V, R, M and F fields is padded to a fixed length,
+epprd_rg:cl_activate_fs(4.327):/sapmnt[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(4.327):/sapmnt[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(4.327):/sapmnt[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(4.327):/sapmnt[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(4.327):/sapmnt[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(4.327):/sapmnt[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(4.327):/sapmnt[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(4.327):/sapmnt[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(4.328):/sapmnt[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(4.331):/sapmnt[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(4.333):/sapmnt[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(4.333):/sapmnt[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(4.333):/sapmnt[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(4.333):/sapmnt[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(4.333):/sapmnt[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
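The typeset -Z2/-Z3 declarations above zero-fill R, M and F, so plain concatenation of the four fields yields a number that compares correctly: 7.2.5.102 becomes 702005102, which is then tested against thresholds such as 701001000. A compact ksh sketch of the idiom:

    typeset -li V VRMF
    typeset -Z2 R
    typeset -Z3 M F
    lslpp -lcqOr bos.rte.filesystem | cut -f3 -d: | IFS=. read V R M F
    VRMF=${V}${R}${M}${F}            # e.g. 7 + 02 + 005 + 102 -> 702005102
    (( V == 7 && VRMF >= 701001000 )) && : this level supports mountguard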
+epprd_rg:cl_activate_fs(4.333):/sapmnt[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(4.333):/sapmnt[fs_mount:245] : the setting would cause VG timestamp change so run once
+epprd_rg:cl_activate_fs(4.333):/sapmnt[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(4.212):/sapmnt[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.3 \n\t lvname = sapmntlv \n\t label = /sapmnt \n\t machine id = 44AF14B00 \n\t number lps = 20 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:41 2022\n \t time modified = Sat Dec 17 14:48:08 2022\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(4.333):/sapmnt[fs_mount:249] chfs -a mountguard=yes /sapmnt
+epprd_rg:cl_activate_fs(4.334):/sapmnt[fs_mount:249] CLUSTER_OVERRIDE=yes
/sapmnt is now guarded against concurrent mounts.
+epprd_rg:cl_activate_fs(4.480):/sapmnt[fs_mount:255] return 0
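Since chfs -a mountguard=yes changes the VG timestamp, fs_mount only issues it when the LVCB does not already carry mountguard=yes, making the operation effectively run-once per filesystem. An idempotent sketch (CLUSTER_OVERRIDE=yes, as in the trace, lets chfs act on a cluster-managed filesystem; LV and FS are assumed for illustration):

    # LV=sapmntlv FS=/sapmnt are assumed for illustration
    if ! getlvcb -T -A "$LV" 2>&1 | grep -q 'mountguard=yes'; then
        CLUSTER_OVERRIDE=yes chfs -a mountguard=yes "$FS"
    fi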
+epprd_rg:cl_activate_fs(4.480):/usr/sap[activate_fs_process_group:527] PS4_LOOP=/usr/sap
+epprd_rg:cl_activate_fs(4.480):/usr/sap[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(4.480):/usr/sap[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(4.480):/usr/sap[activate_fs_process_group:540] fs_mount /usr/sap fsck epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:69] FS=/usr/sap
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp26739098
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:81] : Here check to see if the information in /etc/filesystems for /usr/sap
+epprd_rg:cl_activate_fs(4.480):/usr/sap[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(4.481):/usr/sap[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(4.481):/usr/sap[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(4.481):/usr/sap[fs_mount:86] lsfs -c /usr/sap
+epprd_rg:cl_activate_fs(4.481):/usr/sap[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(4.486):/usr/sap[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(4.482):/usr/sap[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/usr/sap:/dev/saplv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_activate_fs(4.486):/usr/sap[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(4.486):/usr/sap[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(4.487):/usr/sap[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(4.482):/usr/sap[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/usr/sap:/dev/saplv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_activate_fs(4.489):/usr/sap[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(4.491):/usr/sap[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(4.491):/usr/sap[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(4.491):/usr/sap[fs_mount:100] LV_name=saplv
+epprd_rg:cl_activate_fs(4.491):/usr/sap[fs_mount:101] getlvcb -T -A saplv
+epprd_rg:cl_activate_fs(4.492):/usr/sap[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(4.509):/usr/sap[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(4.492):/usr/sap[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.2 \n\t lvname = saplv \n\t label = /usr/sap \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:37 2022\n \t time modified = Sat Dec 17 14:48:05 2022\n '
+epprd_rg:cl_activate_fs(4.509):/usr/sap[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(4.509):/usr/sap[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(4.510):/usr/sap[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(4.492):/usr/sap[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.2 \n\t lvname = saplv \n\t label = /usr/sap \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:37 2022\n \t time modified = Sat Dec 17 14:48:05 2022\n '
+epprd_rg:cl_activate_fs(4.512):/usr/sap[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(4.513):/usr/sap[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(4.514):/usr/sap[fs_mount:115] clodmget -q 'name = saplv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(4.517):/usr/sap[fs_mount:115] CuAt_label=/usr/sap
+epprd_rg:cl_activate_fs(4.517):/usr/sap[fs_mount:118] : At this point, if things are working correctly, /usr/sap from /etc/filesystems
+epprd_rg:cl_activate_fs(4.517):/usr/sap[fs_mount:119] : should match /usr/sap from CuAt ODM and /usr/sap from the LVCB
+epprd_rg:cl_activate_fs(4.517):/usr/sap[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(4.517):/usr/sap[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(4.517):/usr/sap[fs_mount:123] [[ /usr/sap != /usr/sap ]]
+epprd_rg:cl_activate_fs(4.517):/usr/sap[fs_mount:128] [[ /usr/sap != /usr/sap ]]
+epprd_rg:cl_activate_fs(4.517):/usr/sap[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(4.517):/usr/sap[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(4.517):/usr/sap[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(4.537):/usr/sap[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(4.537):/usr/sap[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(4.537):/usr/sap[fs_mount:160] amlog_trace '' 'Activating Filesystem|/usr/sap'
+epprd_rg:cl_activate_fs(4.537):/usr/sap[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(4.537):/usr/sap[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(4.563):/usr/sap[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(4.566):/usr/sap[amlog_trace:319] DATE=2023-01-28T17:10:44.501258
+epprd_rg:cl_activate_fs(4.566):/usr/sap[amlog_trace:320] echo '|2023-01-28T17:10:44.501258|INFO: Activating Filesystem|/usr/sap'
+epprd_rg:cl_activate_fs(4.566):/usr/sap[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(4.566):/usr/sap[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(4.568):/usr/sap[fs_mount:162] : Try to mount filesystem /usr/sap at Jan 28 17:10:44.000
+epprd_rg:cl_activate_fs(4.568):/usr/sap[fs_mount:163] mount /usr/sap
+epprd_rg:cl_activate_fs(4.579):/usr/sap[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(4.579):/usr/sap[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(4.579):/usr/sap[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(4.579):/usr/sap[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/usr/sap'
+epprd_rg:cl_activate_fs(4.579):/usr/sap[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(4.580):/usr/sap[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(4.606):/usr/sap[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(4.608):/usr/sap[amlog_trace:319] DATE=2023-01-28T17:10:44.543965
+epprd_rg:cl_activate_fs(4.608):/usr/sap[amlog_trace:320] echo '|2023-01-28T17:10:44.543965|INFO: Activating Filesystems completed|/usr/sap'
+epprd_rg:cl_activate_fs(4.608):/usr/sap[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(4.608):/usr/sap[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(4.608):/usr/sap[fs_mount:226] : Each of the V, R, M and F fields is padded to a fixed length,
+epprd_rg:cl_activate_fs(4.608):/usr/sap[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(4.608):/usr/sap[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(4.608):/usr/sap[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(4.608):/usr/sap[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(4.608):/usr/sap[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(4.608):/usr/sap[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(4.608):/usr/sap[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(4.608):/usr/sap[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(4.609):/usr/sap[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(4.612):/usr/sap[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(4.614):/usr/sap[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(4.614):/usr/sap[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(4.614):/usr/sap[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(4.614):/usr/sap[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(4.614):/usr/sap[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
+epprd_rg:cl_activate_fs(4.615):/usr/sap[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(4.615):/usr/sap[fs_mount:245] : the setting would cause VG timestamp change so run once
+epprd_rg:cl_activate_fs(4.615):/usr/sap[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(4.492):/usr/sap[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.2 \n\t lvname = saplv \n\t label = /usr/sap \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false \n\t time created = Sat Dec 17 14:46:37 2022\n \t time modified = Sat Dec 17 14:48:05 2022\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(4.615):/usr/sap[fs_mount:249] chfs -a mountguard=yes /usr/sap
+epprd_rg:cl_activate_fs(4.615):/usr/sap[fs_mount:249] CLUSTER_OVERRIDE=yes
/usr/sap is now guarded against concurrent mounts.
+epprd_rg:cl_activate_fs(4.763):/usr/sap[fs_mount:255] return 0
+epprd_rg:cl_activate_fs(4.763):/usr/sap[activate_fs_process_group:543] unset PS4_LOOP PS4_TIMER
+epprd_rg:cl_activate_fs[activate_fs_process_group:546] : Allow any background mount operations to finish
+epprd_rg:cl_activate_fs[activate_fs_process_group:548] wait
+epprd_rg:cl_activate_fs[activate_fs_process_group:550] : Read cluster level Preferred read option
+epprd_rg:cl_activate_fs[activate_fs_process_group:552] clodmget -n -f lvm_preferred_read HACMPcluster
+epprd_rg:cl_activate_fs[activate_fs_process_group:552] cluster_pref_read=roundrobin
+epprd_rg:cl_activate_fs[activate_fs_process_group:555] : Loop over all file systems to update the preferred read option of each LV,
+epprd_rg:cl_activate_fs[activate_fs_process_group:556] : referring to the VG level preferred_read option or the cluster level Preferred read option
+epprd_rg:cl_activate_fs[activate_fs_process_group:560] lsfs -c /board_org
+epprd_rg:cl_activate_fs[activate_fs_process_group:560] 2>& 1
+epprd_rg:cl_activate_fs[activate_fs_process_group:560] FS_info=$'+epprd_rg:cl_activate_fs[activate_fs_process_group:560] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/board_org:/dev/boardlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs[activate_fs_process_group:561] RC=0
+epprd_rg:cl_activate_fs[activate_fs_process_group:562] (( 0 != 0 ))
+epprd_rg:cl_activate_fs[activate_fs_process_group:574] print -- $'+epprd_rg:cl_activate_fs[activate_fs_process_group:560] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/board_org:/dev/boardlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs[activate_fs_process_group:574] tail -1
+epprd_rg:cl_activate_fs[activate_fs_process_group:574] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs[activate_fs_process_group:574] IFS=:
+epprd_rg:cl_activate_fs[activate_fs_process_group:575] LV_name=boardlv
+epprd_rg:cl_activate_fs[activate_fs_process_group:577] grep -w 'VOLUME GROUP'
+epprd_rg:cl_activate_fs[activate_fs_process_group:577] lslv -L boardlv
+epprd_rg:cl_activate_fs[activate_fs_process_group:577] LC_ALL=C
+epprd_rg:cl_activate_fs[activate_fs_process_group:577] volume_group='LOGICAL VOLUME: boardlv VOLUME GROUP: datavg'
+epprd_rg:cl_activate_fs[activate_fs_process_group:578] volume_group=datavg
+epprd_rg:cl_activate_fs[activate_fs_process_group:579] volume_group=datavg
+epprd_rg:cl_activate_fs[activate_fs_process_group:581] clodmget -n -f group -q name='VOLUME_GROUP and value=datavg' HACMPresource
+epprd_rg:cl_activate_fs[activate_fs_process_group:581] RGName=epprd_rg
+epprd_rg:cl_activate_fs[activate_fs_process_group:584] : Get the Preferred storage read option for this VG and perform chlv command
+epprd_rg:cl_activate_fs[activate_fs_process_group:586] clodmget -n -f value -q name='LVM_PREFERRED_READ and volume_group=datavg' HACMPvolumegroup
+epprd_rg:cl_activate_fs[activate_fs_process_group:586] 2> /dev/null
+epprd_rg:cl_activate_fs[activate_fs_process_group:586] PreferredReadOption=''
+epprd_rg:cl_activate_fs[activate_fs_process_group:587] [[ -z '' ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:589] PreferredReadOption=roundrobin
+epprd_rg:cl_activate_fs[activate_fs_process_group:590] [[ -z roundrobin ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:590] [[ roundrobin == roundrobin ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:593] : Both VG level and Cluster level LVM Preferred Read option chosen as roundrobin.
+epprd_rg:cl_activate_fs[activate_fs_process_group:595] chlv -R 0 boardlv
+epprd_rg:cl_activate_fs[activate_fs_process_group:596] (( 0 != 0 ))
+epprd_rg:cl_activate_fs[activate_fs_process_group:600] break
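The resolution order above is: the per-VG LVM_PREFERRED_READ attribute from HACMPvolumegroup first, falling back to the cluster-wide lvm_preferred_read from HACMPcluster; roundrobin maps to chlv -R 0, which clears any preferred mirror copy. A hedged sketch using the same ODM classes as the trace:

    # vg=datavg lv=boardlv are assumed for illustration
    pref=$(clodmget -n -f value \
        -q "name=LVM_PREFERRED_READ and volume_group=$vg" HACMPvolumegroup 2>/dev/null)
    [[ -z "$pref" ]] && pref=$(clodmget -n -f lvm_preferred_read HACMPcluster)
    [[ "$pref" == roundrobin ]] && chlv -R 0 "$lv"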
+epprd_rg:cl_activate_fs[activate_fs_process_group:670] : Update the resource manager with the state of the operation
+epprd_rg:cl_activate_fs[activate_fs_process_group:672] ALLNOERROR=All_non_error_filesystems
+epprd_rg:cl_activate_fs[activate_fs_process_group:673] cl_RMupdate resource_up All_non_error_filesystems cl_activate_fs
2023-01-28T17:10:44.996320
2023-01-28T17:10:45.000558
+epprd_rg:cl_activate_fs[activate_fs_process_group:676] : And harvest any status from the background mount operations
+epprd_rg:cl_activate_fs[activate_fs_process_group:678] [[ -f /tmp/epprd_rg_activate_fs.tmp26739098 ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:688] return 0
+epprd_rg:cl_activate_fs[activate_fs_process_resources:767] RC=0
+epprd_rg:cl_activate_fs[activate_fs_process_resources:768] (( 0 != 0 && 0 == 0 ))
+epprd_rg:cl_activate_fs[activate_fs_process_resources:772] RG_FILE_SYSTEMS=''
+epprd_rg:cl_activate_fs[activate_fs_process_resources:776] return 0
+epprd_rg:cl_activate_fs[851] STATUS=0
+epprd_rg:cl_activate_fs[873] return 0
+epprd_rg:process_resources(10.654)[process_file_systems:2648] RC=0
+epprd_rg:process_resources(10.654)[process_file_systems:2649] (( 0 != 0 && 0 != 11 ))
+epprd_rg:process_resources(10.655)[process_file_systems:2661] (( 0 != 0 ))
+epprd_rg:process_resources(10.655)[process_file_systems:2687] return 0
+epprd_rg:process_resources(10.655)[3483] RC=0
+epprd_rg:process_resources(10.655)[3485] [[ ACQUIRE == RELEASE ]]
+epprd_rg:process_resources(10.655)[3324] true
+epprd_rg:process_resources(10.655)[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources(10.655)[3328] set -a
+epprd_rg:process_resources(10.655)[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T17:10:45.014017 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources(10.674)[3329] eval JOB_TYPE=SYNC_VGS ACTION=ACQUIRE VOLUME_GROUPS='"datavg"' RESOURCE_GROUPS='"epprd_rg' '"'
+epprd_rg:process_resources(10.674)[1] JOB_TYPE=SYNC_VGS
+epprd_rg:process_resources(10.674)[1] ACTION=ACQUIRE
+epprd_rg:process_resources(10.674)[1] VOLUME_GROUPS=datavg
+epprd_rg:process_resources(10.674)[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources(10.674)[3330] RC=0
+epprd_rg:process_resources(10.674)[3331] set +a
+epprd_rg:process_resources(10.674)[3333] (( 0 != 0 ))
+epprd_rg:process_resources(10.674)[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources(10.674)[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources(10.674)[3343] export GROUPNAME
+epprd_rg:process_resources(10.674)[3353] IS_SERVICE_START=1
+epprd_rg:process_resources(10.674)[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources(10.674)[3360] [[ SYNC_VGS == RELEASE ]]
+epprd_rg:process_resources(10.674)[3360] [[ SYNC_VGS == ONLINE ]]
+epprd_rg:process_resources(10.674)[3474] [[ ACQUIRE == ACQUIRE ]]
+epprd_rg:process_resources(10.674)[3476] sync_volume_groups
+epprd_rg:process_resources(10.674)[sync_volume_groups:2699] PS4_FUNC=sync_volume_groups
+epprd_rg:process_resources(10.674)[sync_volume_groups:2699] typeset PS4_FUNC
+epprd_rg:process_resources(10.674)[sync_volume_groups:2700] [[ high == high ]]
+epprd_rg:process_resources(10.674)[sync_volume_groups:2700] set -x
+epprd_rg:process_resources(10.674)[sync_volume_groups:2701] STAT=0
+epprd_rg:process_resources(10.674)[sync_volume_groups:2704] export GROUPNAME
+epprd_rg:process_resources(10.675)[sync_volume_groups:2706] get_list_head datavg
+epprd_rg:process_resources(10.675)[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources(10.675)[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources(10.675)[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources(10.675)[get_list_head:60] set -x
+epprd_rg:process_resources(10.676)[get_list_head:61] echo datavg
+epprd_rg:process_resources(10.678)[get_list_head:61] read listhead listtail
+epprd_rg:process_resources(10.678)[get_list_head:61] IFS=:
+epprd_rg:process_resources(10.679)[get_list_head:62] echo datavg
+epprd_rg:process_resources(10.680)[get_list_head:62] tr , ' '
+epprd_rg:process_resources(10.678)[sync_volume_groups:2706] read LIST_OF_VOLUME_GROUPS_FOR_RG
+epprd_rg:process_resources(10.685)[sync_volume_groups:2707] get_list_tail datavg
+epprd_rg:process_resources(10.685)[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources(10.686)[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources(10.686)[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources(10.686)[get_list_tail:68] set -x
+epprd_rg:process_resources(10.686)[get_list_tail:69] echo datavg
+epprd_rg:process_resources(10.689)[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources(10.689)[get_list_tail:69] IFS=:
+epprd_rg:process_resources(10.689)[get_list_tail:70] echo
+epprd_rg:process_resources(10.688)[sync_volume_groups:2707] read VOLUME_GROUPS
+epprd_rg:process_resources(10.690)[sync_volume_groups:2710] : Sync the active volume groups
+epprd_rg:process_resources(10.691)[sync_volume_groups:2712] lsvg -L -o
+epprd_rg:process_resources(10.691)[sync_volume_groups:2712] 2> /tmp/lsvg.err
+epprd_rg:process_resources(10.694)[sync_volume_groups:2712] sort
+epprd_rg:process_resources(10.695)[sync_volume_groups:2712] 1> /tmp/lsvg.out.23593416
+epprd_rg:process_resources(10.702)[sync_volume_groups:2713] echo datavg
+epprd_rg:process_resources(10.705)[sync_volume_groups:2714] sort
+epprd_rg:process_resources(10.707)[sync_volume_groups:2713] tr ' ' '\n'
+epprd_rg:process_resources(10.709)[sync_volume_groups:2714] comm -12 /tmp/lsvg.out.23593416 -
+epprd_rg:process_resources(10.715)[sync_volume_groups:2718] [[ -s /tmp/lsvg.err ]]
+epprd_rg:process_resources(10.715)[sync_volume_groups:2723] rm -f /tmp/lsvg.out.23593416 /tmp/lsvg.err
+epprd_rg:process_resources(10.717)[sync_volume_groups:2716] cl_sync_vgs datavg
+epprd_rg:cl_sync_vgs[303] version=1.24.1.4
+epprd_rg:cl_sync_vgs[306] (( 1 == 0 ))
+epprd_rg:cl_sync_vgs[312] : syncing 4 stale PPs at a time seems to be a win most of the time, but
+epprd_rg:cl_sync_vgs[313] : we honor the NUM_PARALLEL_LPS value from /etc/environment, as does
+epprd_rg:cl_sync_vgs[314] : syncvg.
+epprd_rg:cl_sync_vgs[316] syncflag=''
+epprd_rg:cl_sync_vgs[316] export syncflag
+epprd_rg:cl_sync_vgs[317] PS4_LOOP=''
+epprd_rg:cl_sync_vgs[317] export PS4_LOOP
+epprd_rg:cl_sync_vgs[318] typeset -i npl
+epprd_rg:cl_sync_vgs[319] grep -q ^NUM_PARALLEL_LPS= /etc/environment
+epprd_rg:process_resources(10.734)[sync_volume_groups:2732] unset AM_SYNC_CALLED_BY
+epprd_rg:process_resources(10.734)[sync_volume_groups:2734] return 0
+epprd_rg:process_resources(10.734)[3324] true
+epprd_rg:process_resources(10.734)[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources(10.734)[3328] set -a
+epprd_rg:cl_sync_vgs[321] syncflag=-P4
+epprd_rg:cl_sync_vgs[328] echo 'NOTE: While the sync is going on, volume group can be used'
NOTE: While the sync is going on, volume group can be used
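As the comments at cl_sync_vgs[312-314] say, the parallelism flag honors NUM_PARALLEL_LPS from /etc/environment and otherwise defaults to syncing 4 stale PPs at a time. A sketch of how the -P flag is built and later applied:

    if grep -q '^NUM_PARALLEL_LPS=' /etc/environment; then
        npl=$(sed -n 's/^NUM_PARALLEL_LPS=//p' /etc/environment)
        syncflag="-P$npl"
    else
        syncflag='-P4'                 # the default seen in the trace
    fi
    # later applied as: syncvg $syncflag -v "$vg_name"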
+epprd_rg:cl_sync_vgs[331] : For GLVM volume groups, read the PARALLEL LPS count from HACMPresource if it is set from the GUI,
+epprd_rg:cl_sync_vgs[332] : else read it from the environment variables; if it is not set, use 32 as the default value.
+epprd_rg:cl_sync_vgs[334] clodmget -q name='GMVG_REP_RESOURCE and value=datavg' -f group HACMPresource
+epprd_rg:cl_sync_vgs[334] 2> /dev/null
+epprd_rg:process_resources(10.734)[3329] clRGPA
+epprd_rg:cl_sync_vgs[334] glvm_rg=''
+epprd_rg:cl_sync_vgs[335] [[ -n '' ]]
+epprd_rg:cl_sync_vgs[353] check_sync datavg
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:76] typeset vg_name
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:77] typeset vgid
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:78] typeset disklist
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:79] typeset lv_name
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:80] typeset -li stale_count
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:81] typeset -li mode
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:82] RC=0
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:82] typeset -li RC
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:83] typeset site_node_list
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:84] typeset site_choice
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:86] vg_name=datavg
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:87] disklist=''
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:89] getlvodm -v datavg
+epprd_rg:cl_sync_vgs(0.030):datavg[check_sync:89] vgid=00c44af100004b00000001851e9dc053
+epprd_rg:cl_sync_vgs(0.030):datavg[check_sync:92] : find disks in the VG that LVM thinks are inaccessible
+epprd_rg:cl_sync_vgs(0.030):datavg[check_sync:94] lsvg -L -p datavg
+epprd_rg:cl_sync_vgs(0.030):datavg[check_sync:94] LC_ALL=C
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T17:10:45.103825 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources(10.757)[3329] eval JOB_TYPE=EXPORT_FILESYSTEMS ACTION=ACQUIRE EXPORT_FILE_SYSTEMS='"/board_org"' EXPORT_FILE_SYSTEMS_V4='""' RESOURCE_GROUPS='"epprd_rg' '"' STABLE_STORAGE_PATH='""' IP_LABELS='"epprd:epprda:epprds"' DAEMONS='"NFS' 'RPCLOCKD"'
+epprd_rg:process_resources(10.757)[1] JOB_TYPE=EXPORT_FILESYSTEMS
+epprd_rg:process_resources(10.757)[1] ACTION=ACQUIRE
+epprd_rg:process_resources(10.757)[1] EXPORT_FILE_SYSTEMS=/board_org
+epprd_rg:process_resources(10.757)[1] EXPORT_FILE_SYSTEMS_V4=''
+epprd_rg:process_resources(10.757)[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources(10.757)[1] STABLE_STORAGE_PATH=''
+epprd_rg:process_resources(10.757)[1] IP_LABELS=epprd:epprda:epprds
+epprd_rg:process_resources(10.757)[1] DAEMONS='NFS RPCLOCKD'
+epprd_rg:process_resources(10.757)[3330] RC=0
+epprd_rg:process_resources(10.757)[3331] set +a
+epprd_rg:process_resources(10.757)[3333] (( 0 != 0 ))
+epprd_rg:process_resources(10.758)[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources(10.758)[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources(10.758)[3343] export GROUPNAME
+epprd_rg:process_resources(10.758)[3353] IS_SERVICE_START=1
+epprd_rg:process_resources(10.758)[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources(10.758)[3360] [[ EXPORT_FILESYSTEMS == RELEASE ]]
+epprd_rg:process_resources(10.758)[3360] [[ EXPORT_FILESYSTEMS == ONLINE ]]
+epprd_rg:process_resources(10.758)[3595] [[ ACQUIRE == ACQUIRE ]]
+epprd_rg:process_resources(10.758)[3597] export_filesystems
+epprd_rg:process_resources(10.758)[export_filesystems:1621] PS4_FUNC=export_filesystems
+epprd_rg:process_resources(10.758)[export_filesystems:1621] typeset PS4_FUNC
+epprd_rg:process_resources(10.758)[export_filesystems:1622] [[ high == high ]]
+epprd_rg:process_resources(10.758)[export_filesystems:1622] set -x
+epprd_rg:process_resources(10.758)[export_filesystems:1623] STAT=0
+epprd_rg:process_resources(10.758)[export_filesystems:1624] NFSSTOPPED=0
+epprd_rg:process_resources(10.758)[export_filesystems:1629] [[ NFS == RPCLOCKD ]]
+epprd_rg:process_resources(10.758)[export_filesystems:1629] [[ RPCLOCKD == RPCLOCKD ]]
+epprd_rg:process_resources(10.758)[export_filesystems:1631] stopsrc -s rpc.lockd
0513-044 The rpc.lockd Subsystem was requested to stop.
+epprd_rg:process_resources(10.771)[export_filesystems:1633] touch /tmp/.RPCLOCKDSTOPPED
+epprd_rg:process_resources(10.780)[export_filesystems:1638] : For NFSv4, cl_export_fs will use STABLE_STORAGE_PATH, which is set by
+epprd_rg:process_resources(10.780)[export_filesystems:1639] : clRGPA and can have colon-separated values for multiple RGs.
+epprd_rg:process_resources(10.780)[export_filesystems:1640] : We will save off clRGPA values in stable_storage_path and then extract
+epprd_rg:process_resources(10.780)[export_filesystems:1641] : each RG into STABLE_STORAGE_PATH for cl_export_fs.
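All clRGPA-supplied variables use the same packing: one colon-separated field per resource group, with commas separating multiple values within a field. get_list_head returns the first field with commas expanded to spaces; get_list_tail returns the remainder. A condensed sketch of the pair, using the packed stable-storage string as an example:

    echo "$STABLE_STORAGE_PATH" | IFS=: read listhead listtail
    echo "$listhead" | tr ',' ' '      # get_list_head: this RG's values
    echo "$listtail"                   # get_list_tail: remaining RGs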
+epprd_rg:process_resources(10.780)[export_filesystems:1643] stable_storage_path=''
+epprd_rg:process_resources(10.780)[export_filesystems:1643] typeset stable_storage_path
+epprd_rg:process_resources(10.780)[export_filesystems:1645] export NFSSTOPPED
+epprd_rg:process_resources(10.780)[export_filesystems:1650] export GROUPNAME
+epprd_rg:process_resources(10.781)[export_filesystems:1652] get_list_head /board_org
+epprd_rg:process_resources(10.781)[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources(10.781)[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources(10.781)[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources(10.781)[get_list_head:60] set -x
+epprd_rg:process_resources(10.782)[get_list_head:61] echo /board_org
+epprd_rg:process_resources(10.784)[get_list_head:61] read listhead listtail
+epprd_rg:process_resources(10.784)[get_list_head:61] IFS=:
+epprd_rg:process_resources(10.785)[get_list_head:62] echo /board_org
+epprd_rg:process_resources(10.787)[get_list_head:62] tr , ' '
+epprd_rg:process_resources(10.784)[export_filesystems:1652] read LIST_OF_EXPORT_FILE_SYSTEMS_FOR_RG
+epprd_rg:process_resources(10.793)[export_filesystems:1653] get_list_tail /board_org
+epprd_rg:process_resources(10.793)[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources(10.793)[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources(10.793)[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources(10.793)[get_list_tail:68] set -x
+epprd_rg:process_resources(10.798)[get_list_tail:69] echo /board_org
+epprd_rg:process_resources(10.798)[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources(10.798)[get_list_tail:69] IFS=:
+epprd_rg:process_resources(10.800)[get_list_tail:70] echo
+epprd_rg:process_resources(10.797)[export_filesystems:1653] read EXPORT_FILE_SYSTEMS
+epprd_rg:process_resources(10.802)[export_filesystems:1654] get_list_head
+epprd_rg:process_resources(10.803)[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources(10.803)[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources(10.803)[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources(10.803)[get_list_head:60] set -x
+epprd_rg:process_resources(10.804)[get_list_head:61] echo
+epprd_rg:process_resources(10.805)[get_list_head:61] read listhead listtail
+epprd_rg:process_resources(10.805)[get_list_head:61] IFS=:
+epprd_rg:process_resources(10.806)[get_list_head:62] echo
+epprd_rg:process_resources(10.808)[get_list_head:62] tr , ' '
+epprd_rg:process_resources(10.805)[export_filesystems:1654] read LIST_OF_EXPORT_FILE_SYSTEMS_V4_FOR_RG
+epprd_rg:process_resources(10.813)[export_filesystems:1655] get_list_tail
+epprd_rg:process_resources(10.813)[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources(10.814)[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources(10.814)[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources(10.814)[get_list_tail:68] set -x
+epprd_rg:process_resources(10.816)[get_list_tail:69] echo
+epprd_rg:process_resources(10.815)[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources(10.815)[get_list_tail:69] IFS=:
+epprd_rg:process_resources(10.817)[get_list_tail:70] echo
+epprd_rg:process_resources(10.815)[export_filesystems:1655] read EXPORT_FILE_SYSTEMS_V4
+epprd_rg:process_resources(10.820)[export_filesystems:1656] get_list_head
+epprd_rg:process_resources(10.820)[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources(10.820)[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources(10.820)[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources(10.820)[get_list_head:60] set -x
+epprd_rg:process_resources(10.821)[get_list_head:61] echo
+epprd_rg:process_resources(10.824)[get_list_head:61] read listhead listtail
+epprd_rg:process_resources(10.824)[get_list_head:61] IFS=:
+epprd_rg:process_resources(10.825)[get_list_head:62] tr , ' '
+epprd_rg:process_resources(10.827)[get_list_head:62] echo
+epprd_rg:process_resources(10.823)[export_filesystems:1656] read STABLE_STORAGE_PATH
+epprd_rg:process_resources(10.829)[export_filesystems:1657] get_list_tail
+epprd_rg:process_resources(10.829)[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources(10.829)[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources(10.829)[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources(10.829)[get_list_tail:68] set -x
+epprd_rg:process_resources(10.830)[get_list_tail:69] echo
+epprd_rg:process_resources(10.834)[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources(10.834)[get_list_tail:69] IFS=:
+epprd_rg:process_resources(10.834)[get_list_tail:70] echo
+epprd_rg:process_resources(10.833)[export_filesystems:1657] read stable_storage_path
+epprd_rg:process_resources(10.834)[export_filesystems:1659] cl_export_fs epprd:epprda:epprds /board_org ''
+epprd_rg:cl_export_fs[102] version=%I%
+epprd_rg:cl_export_fs[105] . /usr/es/sbin/cluster/events/utils/cl_nfs_utils
+epprd_rg:cl_export_fs[98] PROGNAME=cl_export_fs
+epprd_rg:cl_export_fs[99] [[ high == high ]]
+epprd_rg:cl_export_fs[101] set -x
+epprd_rg:cl_export_fs[102] version=%I
+epprd_rg:cl_export_fs[105] cl_exports_data=''
+epprd_rg:cl_export_fs[105] typeset cl_exports_data
+epprd_rg:cl_export_fs[106] EXPFILE=/usr/es/sbin/cluster/etc/exports
+epprd_rg:cl_export_fs[107] HOST=epprd:epprda:epprds
+epprd_rg:cl_export_fs[108] EXPORT_V3=/board_org
+epprd_rg:cl_export_fs[109] EXPORT_V4=''
+epprd_rg:cl_export_fs[111] STATUS=0
+epprd_rg:cl_export_fs[113] LIMIT=60
+epprd_rg:cl_export_fs[113] WAIT=1
+epprd_rg:cl_export_fs[113] TRY=0
+epprd_rg:cl_export_fs[113] typeset -li LIMIT WAIT TRY
+epprd_rg:cl_export_fs[115] PROC_RES=false
+epprd_rg:cl_export_fs[118] : If JOB_TYPE is set, and it does not equal GROUP, then
+epprd_rg:cl_export_fs[119] : we are processing for process_resources
+epprd_rg:cl_export_fs[121] [[ EXPORT_FILESYSTEMS != 0 ]]
+epprd_rg:cl_export_fs[121] [[ EXPORT_FILESYSTEMS != GROUP ]]
+epprd_rg:cl_export_fs[122] PROC_RES=true
+epprd_rg:cl_export_fs[125] set -u
+epprd_rg:cl_export_fs[127] EXPFILE=/usr/es/sbin/cluster/etc/exports
+epprd_rg:cl_export_fs[129] (( 3 < 2 || 3 > 3 ))
+epprd_rg:cl_export_fs[142] DARE_EVENT=reconfig_resource_acquire
+epprd_rg:cl_export_fs[145] : Check memory to see if NFSv4 exports have been configured.
+epprd_rg:cl_export_fs[147] export_v4=''
+epprd_rg:cl_export_fs[148] [[ -z '' ]]
+epprd_rg:cl_export_fs[148] [[ rg_move == reconfig_resource_acquire ]]
+epprd_rg:cl_export_fs[158] : If we do not have NFSv4 exports configured, then determine
+epprd_rg:cl_export_fs[159] : the protocol versions from the HACMP exports file.
+epprd_rg:cl_export_fs[161] [[ -z '' ]]
+epprd_rg:cl_export_fs[161] [[ -r /usr/es/sbin/cluster/etc/exports ]]
+epprd_rg:cl_export_fs[227] /usr/sbin/bootinfo -K
+epprd_rg:cl_export_fs[227] KERNEL_BITS=64
+epprd_rg:cl_export_fs[229] subsystems='nfsd rpc.mountd'
+epprd_rg:cl_export_fs[230] [[ -n '' ]]
+epprd_rg:cl_export_fs[233] : Special processing for cross mounts of EFS keys
+epprd_rg:cl_export_fs[234] : The overmount of /var/efs must be removed prior
+epprd_rg:cl_export_fs[235] : to stopping or restarting NFS, since the SRC
+epprd_rg:cl_export_fs[236] : operations will attempt to check the EFS enablement.
+epprd_rg:cl_export_fs[238] mount
+epprd_rg:cl_export_fs[238] grep -w /var/efs
+epprd_rg:cl_export_fs[238] mounted_info=''
+epprd_rg:cl_export_fs[239] [[ -n '' ]]
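The guard exists because the SRC stop/start of the NFS subsystems probes EFS enablement under /var/efs; a cross mount sitting on that path must be unmounted first. A minimal sketch, assuming nothing else holds the mount:

    mounted_info=$(mount | grep -w /var/efs)
    if [[ -n "$mounted_info" ]]; then
        umount /var/efs        # drop the overmount before SRC touches NFS
    fi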
+epprd_rg:cl_export_fs[295] : Kill and restart everything in '"nfsd' 'rpc.mountd"'
+epprd_rg:cl_export_fs[299] : Kill nfsd, and restart it below
+epprd_rg:cl_export_fs[306] [[ nfsd == nfsd ]]
+epprd_rg:cl_export_fs[307] [[ 64 == 64 ]]
+epprd_rg:cl_export_fs[307] [[ -x /usr/sbin/nfs4smctl ]]
+epprd_rg:cl_export_fs[308] [[ ! -s /etc/xtab ]]
+epprd_rg:cl_export_fs[311] clcheck_server nfsd
+epprd_rg:clcheck_server[118] [[ high == high ]]
+epprd_rg:clcheck_server[118] version=1.10.4.2
+epprd_rg:clcheck_server[119] cl_get_path
+epprd_rg:clcheck_server[119] HA_DIR=es
+epprd_rg:clcheck_server[121] SERVER=nfsd
+epprd_rg:clcheck_server[122] STATUS=0
+epprd_rg:clcheck_server[123] FATAL_ERROR=255
+epprd_rg:clcheck_server[124] retries=0
+epprd_rg:clcheck_server[124] typeset -li retries
+epprd_rg:clcheck_server[126] [[ -n nfsd ]]
+epprd_rg:clcheck_server[131] lssrc -s nfsd
+epprd_rg:clcheck_server[131] LC_ALL=C
+epprd_rg:clcheck_server[131] grep 'not on file'
+epprd_rg:clcheck_server[131] wc -l
+epprd_rg:clcheck_server[131] rc=' 0'
+epprd_rg:clcheck_server[133] (( 0 == 1 ))
+epprd_rg:clcheck_server[143] [[ 0 =~ 3 ]]
+epprd_rg:clcheck_server[147] lssrc -s nfsd
+epprd_rg:clcheck_server[147] 1> /dev/null 2> /dev/null
+epprd_rg:cl_sync_vgs(0.157):datavg[check_sync:94] disklist=$'datavg:\nPV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION\nhdisk2 active 199 89 40..00..00..09..40\nhdisk3 active 199 89 40..00..00..09..40\nhdisk4 active 199 88 40..00..00..08..40\nhdisk5 active 199 89 40..00..00..09..40\nhdisk6 active 199 89 40..00..00..09..40\nhdisk7 active 199 89 40..00..00..09..40\nhdisk8 active 199 89 40..00..00..09..40'
+epprd_rg:cl_sync_vgs(0.158):datavg[check_sync:95] print -- $'datavg:\nPV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION\nhdisk2 active 199 89 40..00..00..09..40\nhdisk3 active 199 89 40..00..00..09..40\nhdisk4 active 199 88 40..00..00..08..40\nhdisk5 active 199 89 40..00..00..09..40\nhdisk6 active 199 89 40..00..00..09..40\nhdisk7 active 199 89 40..00..00..09..40\nhdisk8 active 199 89 40..00..00..09..40'
+epprd_rg:cl_sync_vgs(0.160):datavg[check_sync:95] grep -w missing
+epprd_rg:cl_sync_vgs(0.162):datavg[check_sync:95] cut -f1 '-d '
+epprd_rg:cl_sync_vgs(0.165):datavg[check_sync:95] missing_disklist=''
+epprd_rg:cl_sync_vgs(0.166):datavg[check_sync:96] print -- $'datavg:\nPV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION\nhdisk2 active 199 89 40..00..00..09..40\nhdisk3 active 199 89 40..00..00..09..40\nhdisk4 active 199 88 40..00..00..08..40\nhdisk5 active 199 89 40..00..00..09..40\nhdisk6 active 199 89 40..00..00..09..40\nhdisk7 active 199 89 40..00..00..09..40\nhdisk8 active 199 89 40..00..00..09..40'
+epprd_rg:clcheck_server[161] lssrc -s nfsd
+epprd_rg:clcheck_server[161] LC_ALL=C
+epprd_rg:cl_sync_vgs(0.171):datavg[check_sync:96] grep -w removed
+epprd_rg:clcheck_server[161] egrep 'stop|active'
+epprd_rg:cl_sync_vgs(0.174):datavg[check_sync:96] cut -f1 '-d '
+epprd_rg:cl_sync_vgs(0.178):datavg[check_sync:96] removed_disklist=''
+epprd_rg:cl_sync_vgs(0.178):datavg[check_sync:100] : Proceed if there are some disks that LVM thinks are inaccessible
+epprd_rg:cl_sync_vgs(0.178):datavg[check_sync:102] [[ -n '' ]]
+epprd_rg:cl_sync_vgs(0.178):datavg[check_sync:102] [[ -n '' ]]
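check_sync classifies disks by the PV STATE column of the lsvg -p output: 'missing' and 'removed' both mean LVM considers the disk inaccessible. The grep/cut pipelines above are equivalent to this one-liner:

    LC_ALL=C lsvg -L -p datavg | awk '$2 == "missing" || $2 == "removed" { print $1 }'

Here every hdisk is active, so both lists come back empty and no disk recovery is needed.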
+epprd_rg:cl_sync_vgs(0.178):datavg[check_sync:196] : sync if any LVs in the VG that have stale partitions
+epprd_rg:cl_sync_vgs(0.178):datavg[check_sync:198] (( 0 == 0 ))
+epprd_rg:cl_sync_vgs(0.178):datavg[check_sync:201] : A status of 2,3,5 or 7 indicates the presence of dirty and/or stale partitions
+epprd_rg:cl_sync_vgs(0.178):datavg[check_sync:213] is_start_logged=0
+epprd_rg:cl_sync_vgs(0.178):datavg[check_sync:218] at_least_one_sync_success=0
+epprd_rg:cl_sync_vgs(0.179):datavg[check_sync:219] lqueryvg -g 00c44af100004b00000001851e9dc053 -L
+epprd_rg:clcheck_server[161] check_if_down=''
+epprd_rg:clcheck_server[166] [[ -z '' ]]
+epprd_rg:clcheck_server[171] sleep 1
+epprd_rg:cl_sync_vgs(0.181):datavg[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.197):datavg[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.197):datavg.epprdaloglv[check_sync:221] PS4_LOOP=datavg.epprdaloglv
+epprd_rg:cl_sync_vgs(0.197):datavg.epprdaloglv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.197):datavg.epprdaloglv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.197):datavg.epprdaloglv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.197):datavg.epprdaloglv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.197):datavg.epprdaloglv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.197):datavg.saplv[check_sync:221] PS4_LOOP=datavg.saplv
+epprd_rg:cl_sync_vgs(0.197):datavg.saplv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.198):datavg.saplv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.198):datavg.saplv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.198):datavg.saplv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.198):datavg.saplv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.198):datavg.sapmntlv[check_sync:221] PS4_LOOP=datavg.sapmntlv
+epprd_rg:cl_sync_vgs(0.198):datavg.sapmntlv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.198):datavg.sapmntlv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.198):datavg.sapmntlv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.198):datavg.sapmntlv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.198):datavg.sapmntlv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.198):datavg.oraclelv[check_sync:221] PS4_LOOP=datavg.oraclelv
+epprd_rg:cl_sync_vgs(0.198):datavg.oraclelv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.198):datavg.oraclelv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.198):datavg.oraclelv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.198):datavg.oraclelv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.198):datavg.oraclelv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.198):datavg.epplv[check_sync:221] PS4_LOOP=datavg.epplv
+epprd_rg:cl_sync_vgs(0.198):datavg.epplv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.198):datavg.epplv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.198):datavg.epplv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.198):datavg.epplv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.198):datavg.epplv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.198):datavg.oraarchlv[check_sync:221] PS4_LOOP=datavg.oraarchlv
+epprd_rg:cl_sync_vgs(0.198):datavg.oraarchlv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.198):datavg.oraarchlv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.198):datavg.oraarchlv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.198):datavg.oraarchlv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.198):datavg.oraarchlv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata1lv[check_sync:221] PS4_LOOP=datavg.sapdata1lv
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata1lv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata1lv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata1lv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata1lv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata1lv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata2lv[check_sync:221] PS4_LOOP=datavg.sapdata2lv
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata2lv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata2lv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata2lv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata2lv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata2lv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata3lv[check_sync:221] PS4_LOOP=datavg.sapdata3lv
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata3lv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata3lv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata3lv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata3lv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata3lv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata4lv[check_sync:221] PS4_LOOP=datavg.sapdata4lv
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata4lv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata4lv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata4lv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata4lv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.198):datavg.sapdata4lv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.198):datavg.boardlv[check_sync:221] PS4_LOOP=datavg.boardlv
+epprd_rg:cl_sync_vgs(0.198):datavg.boardlv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.198):datavg.boardlv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.198):datavg.boardlv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.198):datavg.boardlv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.198):datavg.boardlv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.198):datavg.origlogAlv[check_sync:221] PS4_LOOP=datavg.origlogAlv
+epprd_rg:cl_sync_vgs(0.198):datavg.origlogAlv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.198):datavg.origlogAlv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.198):datavg.origlogAlv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.198):datavg.origlogAlv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.198):datavg.origlogAlv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.198):datavg.origlogBlv[check_sync:221] PS4_LOOP=datavg.origlogBlv
+epprd_rg:cl_sync_vgs(0.198):datavg.origlogBlv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.198):datavg.origlogBlv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.198):datavg.origlogBlv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.198):datavg.origlogBlv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.198):datavg.origlogBlv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.198):datavg.mirrlogAlv[check_sync:221] PS4_LOOP=datavg.mirrlogAlv
+epprd_rg:cl_sync_vgs(0.198):datavg.mirrlogAlv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.198):datavg.mirrlogAlv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.198):datavg.mirrlogAlv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.198):datavg.mirrlogAlv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.198):datavg.mirrlogAlv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.198):datavg.mirrlogBlv[check_sync:221] PS4_LOOP=datavg.mirrlogBlv
+epprd_rg:cl_sync_vgs(0.198):datavg.mirrlogBlv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.198):datavg.mirrlogBlv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.198):datavg.mirrlogBlv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.198):datavg.mirrlogBlv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.198):datavg.mirrlogBlv[check_sync:268] [[ -n RG_MOVE ]]
+epprd_rg:cl_sync_vgs(0.198):datavg.mirrlogBlv[check_sync:268] (( 0 == 1 ))
+epprd_rg:cl_sync_vgs[355] exit 0
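lqueryvg -g <vgid> -L prints one 'lvid lvname status' line per LV, and per the comment at check_sync:201 a status of 2, 3, 5 or 7 flags dirty and/or stale partitions. A hedged sketch of the scan-and-sync loop (syncvg -l is the per-LV form; syncflag carries the -P value built earlier):

    lqueryvg -g "$vgid" -L | while read lv_id lv_name lv_status; do
        case $lv_status in
        2|3|5|7) syncvg $syncflag -l "$lv_name" ;;  # stale/dirty partitions
        *)       ;;                                 # nothing to sync
        esac
    done

In this run every LV reports status 1, so the loop falls through and cl_sync_vgs exits 0 without syncing anything.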
+epprd_rg:clcheck_server[172] lssrc -s nfsd
+epprd_rg:clcheck_server[172] LC_ALL=C
+epprd_rg:clcheck_server[172] egrep 'stop|active'
+epprd_rg:clcheck_server[172] check_if_down=''
+epprd_rg:clcheck_server[173] [[ -z '' ]]
+epprd_rg:clcheck_server[177] return 0
+epprd_rg:cl_export_fs[313] startsrc -s nfsd
0513-059 The nfsd Subsystem has been started. Subsystem PID is 28377402.
+epprd_rg:cl_export_fs[314] rc=0
+epprd_rg:cl_export_fs[315] (( 0 == 0 ))
+epprd_rg:cl_export_fs[317] sleep 3
+epprd_rg:cl_export_fs[318] lssrc -s nfsd
+epprd_rg:cl_export_fs[318] LC_ALL=C
+epprd_rg:cl_export_fs[318] tail +2
+epprd_rg:cl_export_fs[318] subsys_state=' nfsd nfs 28377402 active'
+epprd_rg:cl_export_fs[321] (( 0 != 0 ))
+epprd_rg:cl_export_fs[321] print -- ' nfsd nfs 28377402 active'
+epprd_rg:cl_export_fs[321] grep -qw active
+epprd_rg:cl_export_fs[329] : nfsv4 daemon not stopped due to existing mounts
+epprd_rg:cl_export_fs[330] : Turn on NFSv4 grace periods and ignore any errors.
+epprd_rg:cl_export_fs[332] chnfs -I -g on -x 1
+epprd_rg:cl_export_fs[332] ODMDIR=/etc/objrepos
0513-077 Subsystem has been changed.
0513-077 Subsystem has been changed.
+epprd_rg:cl_export_fs[299] : Kill rpc.mountd, and restart it below
+epprd_rg:cl_export_fs[306] [[ rpc.mountd == nfsd ]]
+epprd_rg:cl_export_fs[336] : Friendly stop of rpc.mountd
+epprd_rg:cl_export_fs[338] lssrc -s rpc.mountd
+epprd_rg:cl_export_fs[338] LC_ALL=C
+epprd_rg:cl_export_fs[338] tail +2
+epprd_rg:cl_export_fs[338] grep -qw active
+epprd_rg:cl_export_fs[341] : Now, wait for rpc.mountd to die
+epprd_rg:cl_export_fs[343] (( TRY=0))
+epprd_rg:cl_export_fs[343] (( 0 < 60))
+epprd_rg:cl_export_fs[345] lssrc -s rpc.mountd
+epprd_rg:cl_export_fs[345] LC_ALL=C
+epprd_rg:cl_export_fs[345] tail +2
+epprd_rg:cl_export_fs[345] subsys_state=' rpc.mountd nfs inoperative'
+epprd_rg:cl_export_fs[346] print -- ' rpc.mountd nfs inoperative'
+epprd_rg:cl_export_fs[346] grep -qw inoperative
+epprd_rg:cl_export_fs[348] [[ high == high ]]
+epprd_rg:cl_export_fs[348] set -x
+epprd_rg:cl_export_fs[349] subsys_state=inoperative
+epprd_rg:cl_export_fs[350] break
+epprd_rg:cl_export_fs[356] [[ high == high ]]
+epprd_rg:cl_export_fs[356] set -x
+epprd_rg:cl_export_fs[358] [[ inoperative != inoperative ]]
+epprd_rg:cl_export_fs[382] : If stopsrc has failed to stop rpc.mountd,
+epprd_rg:cl_export_fs[383] : use a real kill on the daemon
+epprd_rg:cl_export_fs[385] ps -eo comm,pid
+epprd_rg:cl_export_fs[385] grep -w rpc.mountd
+epprd_rg:cl_export_fs[385] grep -vw grep
+epprd_rg:cl_export_fs[385] read skip subsys_pid rest
+epprd_rg:cl_export_fs[386] [[ '' == +([0-9]) ]]
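The stop is deliberately two-stage: a friendly stopsrc, a bounded wait for SRC to report inoperative, and a real kill only if a PID is still visible afterwards. A condensed sketch using the trace's LIMIT/WAIT of 60/1:

    stopsrc -s rpc.mountd
    for (( TRY=0; TRY < LIMIT; TRY++ )); do
        LC_ALL=C lssrc -s rpc.mountd | tail +2 | grep -qw inoperative && break
        sleep $WAIT
    done
    ps -eo comm,pid | grep -w rpc.mountd | grep -vw grep | read skip subsys_pid rest
    [[ "$subsys_pid" == +([0-9]) ]] && kill $subsys_pid   # last resort

Here the daemon went inoperative on the first poll, so the kill path is skipped and rpc.mountd is simply restarted.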
+epprd_rg:cl_export_fs[389] : If rpc.mountd has been stopped,
+epprd_rg:cl_export_fs[390] : start it back up again.
+epprd_rg:cl_export_fs[392] clcheck_server rpc.mountd
+epprd_rg:clcheck_server[118] [[ high == high ]]
+epprd_rg:clcheck_server[118] version=1.10.4.2
+epprd_rg:clcheck_server[119] cl_get_path
+epprd_rg:clcheck_server[119] HA_DIR=es
+epprd_rg:clcheck_server[121] SERVER=rpc.mountd
+epprd_rg:clcheck_server[122] STATUS=0
+epprd_rg:clcheck_server[123] FATAL_ERROR=255
+epprd_rg:clcheck_server[124] retries=0
+epprd_rg:clcheck_server[124] typeset -li retries
+epprd_rg:clcheck_server[126] [[ -n rpc.mountd ]]
+epprd_rg:clcheck_server[131] lssrc -s rpc.mountd
+epprd_rg:clcheck_server[131] LC_ALL=C
+epprd_rg:clcheck_server[131] grep 'not on file'
+epprd_rg:clcheck_server[131] wc -l
+epprd_rg:clcheck_server[131] rc=' 0'
+epprd_rg:clcheck_server[133] (( 0 == 1 ))
+epprd_rg:clcheck_server[143] [[ 0 =~ 3 ]]
+epprd_rg:clcheck_server[147] lssrc -s rpc.mountd
+epprd_rg:clcheck_server[147] 1> /dev/null 2> /dev/null
+epprd_rg:clcheck_server[161] lssrc -s rpc.mountd
+epprd_rg:clcheck_server[161] LC_ALL=C
+epprd_rg:clcheck_server[161] egrep 'stop|active'
+epprd_rg:clcheck_server[161] check_if_down=''
+epprd_rg:clcheck_server[166] [[ -z '' ]]
+epprd_rg:clcheck_server[171] sleep 1
+epprd_rg:clcheck_server[172] lssrc -s rpc.mountd
+epprd_rg:clcheck_server[172] LC_ALL=C
+epprd_rg:clcheck_server[172] egrep 'stop|active'
+epprd_rg:clcheck_server[172] check_if_down=''
+epprd_rg:clcheck_server[173] [[ -z '' ]]
+epprd_rg:clcheck_server[177] return 0
+epprd_rg:cl_export_fs[394] [[ rpc.mountd == nfsd ]]
+epprd_rg:cl_export_fs[403] : Start rpc.mountd back up again
+epprd_rg:cl_export_fs[405] startsrc -s rpc.mountd
0513-059 The rpc.mountd Subsystem has been started. Subsystem PID is 8257896.
+epprd_rg:cl_export_fs[406] rc=0
+epprd_rg:cl_export_fs[407] (( 0 == 0 ))
+epprd_rg:cl_export_fs[409] sleep 3
+epprd_rg:cl_export_fs[410] tail +2
+epprd_rg:cl_export_fs[410] lssrc -s rpc.mountd
+epprd_rg:cl_export_fs[410] LC_ALL=C
+epprd_rg:cl_export_fs[410] subsys_state=' rpc.mountd nfs 8257896 active'
+epprd_rg:cl_export_fs[413] (( 0 != 0 ))
+epprd_rg:cl_export_fs[413] print -- ' rpc.mountd nfs 8257896 active'
+epprd_rg:cl_export_fs[413] grep -qw active
+epprd_rg:cl_export_fs[431] : Set the NFSv4 nfsroot parameter. This must be set prior to any
+epprd_rg:cl_export_fs[432] : NFS exports that use the exname option, and cannot be set to a new
+epprd_rg:cl_export_fs[433] : value if any exname exports already exist. This is normally done
+epprd_rg:cl_export_fs[434] : at IPL, but rc.nfs is not run at boot when HACMP is installed.
+epprd_rg:cl_export_fs[436] [[ -n '' ]]
+epprd_rg:cl_export_fs[438] hasrv=''
+epprd_rg:cl_export_fs[440] [[ -z '' ]]
+epprd_rg:cl_export_fs[442] query=name='STABLE_STORAGE_PATH AND group=epprd_rg'
+epprd_rg:cl_export_fs[443] odmget -q name='STABLE_STORAGE_PATH AND group=epprd_rg' HACMPresource
+epprd_rg:cl_export_fs[444] sed -n $'s/^[ \t]*value = "\\(.*\\)"/\\1/p'
+epprd_rg:cl_export_fs[443] STABLE_STORAGE_PATH=''
+epprd_rg:cl_export_fs[447] [[ -z '' ]]
+epprd_rg:cl_export_fs[449] STABLE_STORAGE_PATH=/var/adm/nfsv4.hacmp/epprd_rg
+epprd_rg:cl_export_fs[452] [[ -z '' ]]
+epprd_rg:cl_export_fs[454] query=name='STABLE_STORAGE_COOKIE AND group=epprd_rg'
+epprd_rg:cl_export_fs[455] odmget -q name='STABLE_STORAGE_COOKIE AND group=epprd_rg' HACMPresource
+epprd_rg:cl_export_fs[456] sed -n $'s/^[ \t]*value = "\\(.*\\)"/\\1/p'
+epprd_rg:cl_export_fs[455] STABLE_STORAGE_COOKIE=''
+epprd_rg:cl_export_fs[459] [[ -n epprd_rg ]]
+epprd_rg:cl_export_fs[461] odmget -q 'name = SERVICE_LABEL and group = epprd_rg' HACMPresource
+epprd_rg:cl_export_fs[462] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:cl_export_fs[461] SERVICE_LABEL=epprd
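Editor's note: SERVICE_LABEL here, and STABLE_STORAGE_PATH and RECOVERY_METHOD elsewhere in this log, are all fetched the same way: odmget against HACMPresource, then sed to peel the value out of the 'value = "..."' stanza. Wrapped up as a sketch (get_rg_attr is an illustrative name):

    function get_rg_attr
    {
        typeset attr=$1 group=$2
        odmget -q "name = $attr and group = $group" HACMPresource |
            sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
    }

    SERVICE_LABEL=$(get_rg_attr SERVICE_LABEL epprd_rg)    # yields epprd here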
+epprd_rg:cl_export_fs[465] primary epprd
+epprd_rg:cl_export_fs[primary:55] echo epprd
+epprd_rg:cl_export_fs[465] primary=epprd
+epprd_rg:cl_export_fs[466] secondary epprd
+epprd_rg:cl_export_fs[secondary:74] [[ -n epprd ]]
+epprd_rg:cl_export_fs[secondary:74] shift
+epprd_rg:cl_export_fs[secondary:75] echo ''
+epprd_rg:cl_export_fs[466] secondary=''
+epprd_rg:cl_export_fs[468] nfs_node_state=''
+epprd_rg:cl_export_fs[471] : Determine if grace periods are enabled
+epprd_rg:cl_export_fs[473] ps -eo args
+epprd_rg:cl_export_fs[473] grep -w nfsd
+epprd_rg:cl_export_fs[473] grep -qw -- '-gp on'
+epprd_rg:cl_export_fs[476] gp=off
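Editor's note: grace-period support is detected purely from the nfsd command line, exactly as the three trace lines above show:

    # gp=on only if nfsd was started with '-gp on'; '--' stops grep from
    # treating '-gp on' as its own options.
    if ps -eo args | grep -w nfsd | grep -qw -- '-gp on'
    then
        gp=on
    else
        gp=off      # the case taken in this trace
    fi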
+epprd_rg:cl_export_fs[480] : We can use an NFSv4 node if grace periods are enabled, we are running a
+epprd_rg:cl_export_fs[481] : 64-bit kernel, and the nfs4smctl command exists.
+epprd_rg:cl_export_fs[483] [[ off == on ]]
+epprd_rg:cl_export_fs[487] rm -f '/var/adm/nfsv4.hacmp/epprd_rg/*'
+epprd_rg:cl_export_fs[487] 2> /dev/null
+epprd_rg:cl_export_fs[491] : If we have NFSv4 exports, then we need to configure our NFS node so that
+epprd_rg:cl_export_fs[492] : we can use stable storage. Note, NFS only supports this functionality in
+epprd_rg:cl_export_fs[493] : its 64-bit kernels.
+epprd_rg:cl_export_fs[495] [[ -n '' ]]
+epprd_rg:cl_export_fs[580] [[ '' == acquiring ]]
+epprd_rg:cl_export_fs[585] ALLEXPORTS=All_exports
+epprd_rg:cl_export_fs[587] : update resource manager with this action
+epprd_rg:cl_export_fs[589] cl_RMupdate resource_acquiring All_exports cl_export_fs
2023-01-28T17:10:54.469073
2023-01-28T17:10:54.473352
+epprd_rg:cl_export_fs[592] : Build a list of all filesystems that need to be exported, irrespective of
+epprd_rg:cl_export_fs[593] : the protocol version. Since some filesystems may be exported with multiple
+epprd_rg:cl_export_fs[594] : versions, remove any duplicates.
+epprd_rg:cl_export_fs[596] echo /board_org
+epprd_rg:cl_export_fs[596] tr ' ' '\n'
+epprd_rg:cl_export_fs[596] sort -u
+epprd_rg:cl_export_fs[596] FILESYSTEM_LIST=/board_org
+epprd_rg:cl_export_fs[599] : Loop through all of the filesystems we need to export ...
+epprd_rg:cl_export_fs[603] v3=''
+epprd_rg:cl_export_fs[604] v4=''
+epprd_rg:cl_export_fs[605] root=epprd:epprda:epprds
+epprd_rg:cl_export_fs[606] new_options=''
+epprd_rg:cl_export_fs[607] export_file_line=''
+epprd_rg:cl_export_fs[608] USING_EXPORTS_FILE=0
+epprd_rg:cl_export_fs[609] export_lines=''
+epprd_rg:cl_export_fs[610] otheroption=''
+epprd_rg:cl_export_fs[613] : Get the export line from exportfs for this export
+epprd_rg:cl_export_fs[615] exportfs
+epprd_rg:cl_export_fs[615] grep '^[[:space:]]*/board_org[[:space:]]'
+epprd_rg:cl_export_fs[615] export_line=''
+epprd_rg:cl_export_fs[617] [[ -r /usr/es/sbin/cluster/etc/exports ]]
+epprd_rg:cl_export_fs[636] : If the filesystem currently is not exported, then get the options from
+epprd_rg:cl_export_fs[637] : the exports file. We will merge these options with options specified
+epprd_rg:cl_export_fs[638] : through resource group attributes to produce the actual options we will
+epprd_rg:cl_export_fs[639] : provide to exportfs.
+epprd_rg:cl_export_fs[641] [[ -z '' ]]
+epprd_rg:cl_export_fs[643] export_line=''
+epprd_rg:cl_export_fs[644] USING_EXPORTS_FILE=1
+epprd_rg:cl_export_fs[648] : In case of multiple exports for same filesystem
+epprd_rg:cl_export_fs[649] : Process them line by line
+epprd_rg:cl_export_fs[651] set +u
+epprd_rg:cl_export_fs[652] oldifs=$' \t\n'
+epprd_rg:cl_export_fs[653] IFS=$'\n'
+epprd_rg:cl_export_fs[653] export_lines=( )
+epprd_rg:cl_export_fs[654] IFS=$' \t\n'
+epprd_rg:cl_export_fs[656] [ -n '' ]
+epprd_rg:cl_export_fs[733] set -u
+epprd_rg:cl_export_fs[736] : At this point, v3 and v4 are set based on what is actually exported
+epprd_rg:cl_export_fs[737] : or what is configured to be exported in the exports file.
+epprd_rg:cl_export_fs[740] (( USING_EXPORTS_FILE ))
+epprd_rg:cl_export_fs[742] v3=''
+epprd_rg:cl_export_fs[743] v4=''
+epprd_rg:cl_export_fs[747] : At this point, v3 and v4 are set based on what is actually exported.
+epprd_rg:cl_export_fs[748] : Now add additional versions if the resource group has them configured.
+epprd_rg:cl_export_fs[752] [[ /board_org == /board_org ]]
+epprd_rg:cl_export_fs[752] v3=:2:3
+epprd_rg:cl_export_fs[752] break
+epprd_rg:cl_export_fs[761] : Versions 2 and 3 are the default versions. Some versions of AIX do
+epprd_rg:cl_export_fs[762] : not support the vers export option, so only use the option if we are
+epprd_rg:cl_export_fs[763] : exporting a non-default value such as 4
+epprd_rg:cl_export_fs[765] [[ -n '' ]]
+epprd_rg:cl_export_fs[779] [[ -n epprd:epprda:epprds ]]
+epprd_rg:cl_export_fs[782] : If we have root privileged clients,
+epprd_rg:cl_export_fs[783] : then add them to the option list.
+epprd_rg:cl_export_fs[785] new_options=,root=epprd:epprda:epprds
+epprd_rg:cl_export_fs[788] [[ -n '' ]]
+epprd_rg:cl_export_fs[798] : Strip off the leading comma
+epprd_rg:cl_export_fs[800] echo ,root=epprd:epprda:epprds
+epprd_rg:cl_export_fs[800] cut -d, -f2-
+epprd_rg:cl_export_fs[800] new_options=root=epprd:epprda:epprds
+epprd_rg:cl_export_fs[802] [[ -z root=epprd:epprda:epprds ]]
+epprd_rg:cl_export_fs[811] : Exporting filesystem /board_org with options root=epprd:epprda:epprds
+epprd_rg:cl_export_fs[813] exportfs -i -o root=epprd:epprda:epprds /board_org
+epprd_rg:cl_export_fs[814] RC=0
+epprd_rg:cl_export_fs[817] (( 0 != 0 ))
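Editor's note: the option string is accumulated with a leading comma and trimmed just before use; the export itself goes through 'exportfs -i' so the cluster-supplied options take precedence over /etc/exports. Condensed from the trace (the error message is assumed; this run succeeds):

    new_options=$(echo ,root=epprd:epprda:epprds | cut -d, -f2-)   # strip leading comma
    exportfs -i -o $new_options /board_org
    RC=$?
    (( RC != 0 )) && print -u2 "export of /board_org failed"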
+epprd_rg:cl_export_fs[834] ALLNOERREXPORT=All_nonerror_exports
+epprd_rg:cl_export_fs[836] : update resource manager with results
+epprd_rg:cl_export_fs[838] cl_RMupdate resource_up All_nonerror_exports cl_export_fs
2023-01-28T17:10:54.524497
2023-01-28T17:10:54.528792
+epprd_rg:cl_export_fs[840] exit 0
+epprd_rg:process_resources(20.182)[export_filesystems:1662] RC=0
+epprd_rg:process_resources(20.182)[export_filesystems:1663] (( 0 != 0 && 0 != 11 ))
+epprd_rg:process_resources(20.182)[export_filesystems:1669] (( 0 != 0 ))
+epprd_rg:process_resources(20.182)[export_filesystems:1675] return 0
+epprd_rg:process_resources(20.182)[3324] true
+epprd_rg:process_resources(20.183)[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources(20.183)[3328] set -a
+epprd_rg:process_resources(20.183)[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T17:10:54.542350 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources(20.196)[3329] eval JOB_TYPE=TELINIT
+epprd_rg:process_resources(20.196)[1] JOB_TYPE=TELINIT
+epprd_rg:process_resources(20.196)[3330] RC=0
+epprd_rg:process_resources(20.196)[3331] set +a
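Editor's note: this is the shape of the process_resources main loop seen throughout the log: clRGPA prints shell assignments describing the next job, and they are eval'ed under allexport so every child script inherits them. A sketch only; the dispatch body is elided:

    while true
    do
        set -a                     # allexport: eval'ed assignments get exported
        eval $(clRGPA)             # emits e.g. JOB_TYPE=TELINIT or JOB_TYPE=NONE
        RC=$?
        set +a
        (( RC != 0 )) && break
        [[ $JOB_TYPE == NONE ]] && break   # NONE terminates the loop, as seen below
        # ... dispatch on JOB_TYPE/ACTION ...
    done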
+epprd_rg:process_resources(20.196)[3333] (( 0 != 0 ))
+epprd_rg:process_resources(20.196)[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources(20.196)[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources(20.196)[3343] export GROUPNAME
+epprd_rg:process_resources(20.196)[3353] IS_SERVICE_START=1
+epprd_rg:process_resources(20.196)[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources(20.196)[3360] [[ TELINIT == RELEASE ]]
+epprd_rg:process_resources(20.196)[3360] [[ TELINIT == ONLINE ]]
+epprd_rg:process_resources(20.196)[3435] cl_telinit
+epprd_rg:cl_telinit[178] version=%I%
+epprd_rg:cl_telinit[182] TELINIT_FILE=/usr/es/sbin/cluster/.telinit
+epprd_rg:cl_telinit[183] USE_TELINIT_FILE=/usr/es/sbin/cluster/.use_telinit
+epprd_rg:cl_telinit[185] [[ -f /usr/es/sbin/cluster/.use_telinit ]]
+epprd_rg:cl_telinit[189] USE_TELINIT=0
+epprd_rg:cl_telinit[198] [[ '' == -boot ]]
+epprd_rg:cl_telinit[236] cl_lsitab clinit
+epprd_rg:cl_telinit[236] 1> /dev/null 2>& 1
+epprd_rg:cl_telinit[239] : telinit a disabled
+epprd_rg:cl_telinit[241] return 0
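Editor's note: cl_telinit only reruns inittab processing when the clinit marker entry exists; cl_lsitab failing, as here, means telinit stays disabled. Roughly (the 'telinit a' action is an assumption inferred from the 'telinit a disabled' comment in the trace):

    if cl_lsitab clinit > /dev/null 2>&1
    then
        telinit a          # assumed: process the 'a' inittab entries
    else
        : telinit a disabled
    fi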
+epprd_rg:process_resources(20.217)[3324] true
+epprd_rg:process_resources(20.217)[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources(20.217)[3328] set -a
+epprd_rg:process_resources(20.217)[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T17:10:54.576241 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources(20.230)[3329] eval JOB_TYPE=MOUNT_FILESYSTEMS ACTION=ACQUIRE FILE_SYSTEMS='"/board;/board_org"' RESOURCE_GROUPS='"epprd_rg' '"' NFS_NETWORKS='""' NFS_HOSTS='""' IP_LABELS='"epprd"'
+epprd_rg:process_resources(20.230)[1] JOB_TYPE=MOUNT_FILESYSTEMS
+epprd_rg:process_resources(20.230)[1] ACTION=ACQUIRE
+epprd_rg:process_resources(20.230)[1] FILE_SYSTEMS='/board;/board_org'
+epprd_rg:process_resources(20.230)[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources(20.230)[1] NFS_NETWORKS=''
+epprd_rg:process_resources(20.230)[1] NFS_HOSTS=''
+epprd_rg:process_resources(20.230)[1] IP_LABELS=epprd
+epprd_rg:process_resources(20.230)[3330] RC=0
+epprd_rg:process_resources(20.230)[3331] set +a
+epprd_rg:process_resources(20.230)[3333] (( 0 != 0 ))
+epprd_rg:process_resources(20.230)[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources(20.230)[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources(20.230)[3343] export GROUPNAME
+epprd_rg:process_resources(20.230)[3353] IS_SERVICE_START=1
+epprd_rg:process_resources(20.230)[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources(20.230)[3360] [[ MOUNT_FILESYSTEMS == RELEASE ]]
+epprd_rg:process_resources(20.230)[3360] [[ MOUNT_FILESYSTEMS == ONLINE ]]
+epprd_rg:process_resources(20.230)[3612] [[ ACQUIRE == ACQUIRE ]]
+epprd_rg:process_resources(20.230)[3614] mount_nfs_filesystems MOUNT
+epprd_rg:process_resources(20.230)[mount_nfs_filesystems:1437] PS4_FUNC=mount_nfs_filesystems
+epprd_rg:process_resources(20.230)[mount_nfs_filesystems:1437] typeset PS4_FUNC
+epprd_rg:process_resources(20.230)[mount_nfs_filesystems:1438] [[ high == high ]]
+epprd_rg:process_resources(20.230)[mount_nfs_filesystems:1438] set -x
+epprd_rg:process_resources(20.230)[mount_nfs_filesystems:1440] post_event_member=FALSE
+epprd_rg:process_resources(20.230)[mount_nfs_filesystems:1444] [[ epprda == epprda ]]
+epprd_rg:process_resources(20.230)[mount_nfs_filesystems:1446] post_event_member=TRUE
+epprd_rg:process_resources(20.230)[mount_nfs_filesystems:1447] break
+epprd_rg:process_resources(20.230)[mount_nfs_filesystems:1452] : This node will not be in the resource group so do not mount filesystems.
+epprd_rg:process_resources(20.230)[mount_nfs_filesystems:1454] [[ TRUE == FALSE ]]
+epprd_rg:process_resources(20.230)[mount_nfs_filesystems:1459] STAT=0
+epprd_rg:process_resources(20.230)[mount_nfs_filesystems:1463] export GROUPNAME
+epprd_rg:process_resources(20.231)[mount_nfs_filesystems:1465] get_list_head '/board;/board_org'
+epprd_rg:process_resources(20.231)[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources(20.231)[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources(20.231)[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources(20.231)[get_list_head:60] set -x
+epprd_rg:process_resources(20.232)[get_list_head:61] echo '/board;/board_org'
+epprd_rg:process_resources(20.234)[get_list_head:61] read listhead listtail
+epprd_rg:process_resources(20.234)[get_list_head:61] IFS=:
+epprd_rg:process_resources(20.235)[get_list_head:62] echo '/board;/board_org'
+epprd_rg:process_resources(20.237)[get_list_head:62] tr , ' '
+epprd_rg:process_resources(20.234)[mount_nfs_filesystems:1465] read LIST_OF_FILE_SYSTEMS_FOR_RG
+epprd_rg:process_resources(20.242)[mount_nfs_filesystems:1466] get_list_tail '/board;/board_org'
+epprd_rg:process_resources(20.242)[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources(20.242)[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources(20.242)[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources(20.242)[get_list_tail:68] set -x
+epprd_rg:process_resources(20.243)[get_list_tail:69] echo '/board;/board_org'
+epprd_rg:process_resources(20.244)[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources(20.244)[get_list_tail:69] IFS=:
+epprd_rg:process_resources(20.245)[get_list_tail:70] echo
+epprd_rg:process_resources(20.246)[mount_nfs_filesystems:1466] read FILE_SYSTEMS
+epprd_rg:process_resources(20.247)[mount_nfs_filesystems:1468] get_list_head
+epprd_rg:process_resources(20.247)[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources(20.247)[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources(20.247)[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources(20.247)[get_list_head:60] set -x
+epprd_rg:process_resources(20.248)[get_list_head:61] echo
+epprd_rg:process_resources(20.250)[get_list_head:61] read listhead listtail
+epprd_rg:process_resources(20.250)[get_list_head:61] IFS=:
+epprd_rg:process_resources(20.251)[get_list_head:62] echo
+epprd_rg:process_resources(20.252)[get_list_head:62] tr , ' '
+epprd_rg:process_resources(20.250)[mount_nfs_filesystems:1468] read NFS_HOST
+epprd_rg:process_resources(20.256)[mount_nfs_filesystems:1469] get_list_tail
+epprd_rg:process_resources(20.256)[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources(20.256)[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources(20.256)[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources(20.256)[get_list_tail:68] set -x
+epprd_rg:process_resources(20.257)[get_list_tail:69] echo
+epprd_rg:process_resources(20.260)[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources(20.261)[get_list_tail:69] IFS=:
+epprd_rg:process_resources(20.261)[get_list_tail:70] echo
+epprd_rg:process_resources(20.260)[mount_nfs_filesystems:1469] read NFS_HOSTS
+epprd_rg:process_resources(20.263)[mount_nfs_filesystems:1471] get_list_head
+epprd_rg:process_resources(20.264)[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources(20.264)[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources(20.264)[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources(20.264)[get_list_head:60] set -x
+epprd_rg:process_resources(20.266)[get_list_head:61] echo
+epprd_rg:process_resources(20.265)[get_list_head:61] read listhead listtail
+epprd_rg:process_resources(20.265)[get_list_head:61] IFS=:
+epprd_rg:process_resources(20.268)[get_list_head:62] echo
+epprd_rg:process_resources(20.269)[get_list_head:62] tr , ' '
+epprd_rg:process_resources(20.265)[mount_nfs_filesystems:1471] read NFS_NETWORK
+epprd_rg:process_resources(20.274)[mount_nfs_filesystems:1472] get_list_tail
+epprd_rg:process_resources(20.274)[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources(20.274)[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources(20.274)[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources(20.274)[get_list_tail:68] set -x
+epprd_rg:process_resources(20.276)[get_list_tail:69] echo
+epprd_rg:process_resources(20.277)[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources(20.277)[get_list_tail:69] IFS=:
+epprd_rg:process_resources(20.277)[get_list_tail:70] echo
+epprd_rg:process_resources(20.275)[mount_nfs_filesystems:1472] read NFS_NETWORKS
+epprd_rg:process_resources(20.280)[mount_nfs_filesystems:1474] get_list_head epprd
+epprd_rg:process_resources(20.280)[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources(20.280)[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources(20.280)[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources(20.280)[get_list_head:60] set -x
+epprd_rg:process_resources(20.281)[get_list_head:61] echo epprd
+epprd_rg:process_resources(20.283)[get_list_head:61] read listhead listtail
+epprd_rg:process_resources(20.283)[get_list_head:61] IFS=:
+epprd_rg:process_resources(20.284)[get_list_head:62] echo epprd
+epprd_rg:process_resources(20.284)[get_list_head:62] tr , ' '
+epprd_rg:process_resources(20.282)[mount_nfs_filesystems:1474] read LIST_OF_IP_LABELS_FOR_RG
+epprd_rg:process_resources(20.288)[mount_nfs_filesystems:1475] get_list_tail epprd
+epprd_rg:process_resources(20.289)[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources(20.289)[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources(20.289)[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources(20.289)[get_list_tail:68] set -x
+epprd_rg:process_resources(20.290)[get_list_tail:69] echo epprd
+epprd_rg:process_resources(20.293)[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources(20.293)[get_list_tail:69] IFS=:
+epprd_rg:process_resources(20.293)[get_list_tail:70] echo
+epprd_rg:process_resources(20.293)[mount_nfs_filesystems:1475] read IP_LABELS
+epprd_rg:process_resources(20.294)[mount_nfs_filesystems:1477] MOUNT_FILESYSTEM='/board;/board_org'
+epprd_rg:process_resources(20.294)[mount_nfs_filesystems:1478] NFSMOUNT_LABEL=epprd
+epprd_rg:process_resources(20.294)[mount_nfs_filesystems:1481] : Do the required NFS_mounts.
+epprd_rg:process_resources(20.294)[mount_nfs_filesystems:1484] NW_NFSMOUNT_LABEL=''
+epprd_rg:process_resources(20.294)[mount_nfs_filesystems:1486] [[ -z '' ]]
+epprd_rg:process_resources(20.294)[mount_nfs_filesystems:1488] NFS_HOST=epprda
+epprd_rg:process_resources(20.294)[mount_nfs_filesystems:1491] NFSHOST=''
+epprd_rg:process_resources(20.294)[mount_nfs_filesystems:1492] [[ -n epprda ]]
+epprd_rg:process_resources(20.294)[mount_nfs_filesystems:1494] [[ -n '' ]]
+epprd_rg:process_resources(20.294)[mount_nfs_filesystems:1516] [[ MOUNT == REMOUNT ]]
+epprd_rg:process_resources(20.294)[mount_nfs_filesystems:1526] ping epprd 1024 1
+epprd_rg:process_resources(20.296)[mount_nfs_filesystems:1526] 1> /dev/null
+epprd_rg:process_resources(20.299)[mount_nfs_filesystems:1528] NFSHOST=epprd
+epprd_rg:process_resources(20.299)[mount_nfs_filesystems:1529] break
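Editor's note: the mount host is chosen by pinging each candidate IP label and taking the first that answers. On AIX, ping accepts a positional packet size and count, so this is a single 1024-byte probe:

    NFSHOST=''
    for label in epprd                       # candidate IP labels for the RG
    do
        if ping $label 1024 1 > /dev/null    # one 1024-byte packet
        then
            NFSHOST=$label
            break
        fi
    done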
+epprd_rg:process_resources(20.299)[mount_nfs_filesystems:1533] [[ -n epprd ]]
+epprd_rg:process_resources(20.299)[mount_nfs_filesystems:1536] : activate_nfs will not wait for the mounts to complete
+epprd_rg:process_resources(20.299)[mount_nfs_filesystems:1538] cl_activate_nfs 1 epprd '/board;/board_org'
+epprd_rg:cl_activate_nfs[68] [[ high == high ]]
+epprd_rg:cl_activate_nfs[68] version='1.19.4.2 $Source$'
+epprd_rg:cl_activate_nfs[70] . /usr/es/sbin/cluster/events/utils/cl_nfs_utils
+epprd_rg:cl_activate_nfs[98] PROGNAME=cl_activate_nfs
+epprd_rg:cl_activate_nfs[99] [[ high == high ]]
+epprd_rg:cl_activate_nfs[101] set -x
+epprd_rg:cl_activate_nfs[102] version=%I
+epprd_rg:cl_activate_nfs[105] cl_exports_data=''
+epprd_rg:cl_activate_nfs[105] typeset cl_exports_data
+epprd_rg:cl_activate_nfs[106] EXPFILE=/usr/es/sbin/cluster/etc/exports
+epprd_rg:cl_activate_nfs[72] set -u
+epprd_rg:cl_activate_nfs[242] grep -w ^MOUNT_WLMCNTRL_SELFMANAGE /etc/environment
+epprd_rg:cl_activate_nfs[242] export eval
+epprd_rg:cl_activate_nfs[244] (( 3 < 3 ))
+epprd_rg:cl_activate_nfs[253] ATTEMPTS=1
+epprd_rg:cl_activate_nfs[253] typeset -li ATTEMPTS
+epprd_rg:cl_activate_nfs[254] HOST=epprd
+epprd_rg:cl_activate_nfs[256] shift 2
+epprd_rg:cl_activate_nfs[261] FILELIST='/board;/board_org'
+epprd_rg:cl_activate_nfs[266] print '/board;/board_org'
+epprd_rg:cl_activate_nfs[266] grep -q '\;/'
+epprd_rg:cl_activate_nfs[271] CROSSMOUNTS=TRUE
+epprd_rg:cl_activate_nfs[272] print '/board;/board_org'
+epprd_rg:cl_activate_nfs[272] /bin/sort -k 1,1 '-t;'
+epprd_rg:cl_activate_nfs[272] tr ' ' '\n'
+epprd_rg:cl_activate_nfs[272] MOUNTLIST='/board;/board_org'
+epprd_rg:cl_activate_nfs[281] ALLNFS=All_nfs_mounts
+epprd_rg:cl_activate_nfs[282] cl_RMupdate resource_acquiring All_nfs_mounts cl_activate_nfs
2023-01-28T17:10:54.705184
2023-01-28T17:10:54.709404
+epprd_rg:cl_activate_nfs[288] odmget -q name='RECOVERY_METHOD AND group=epprd_rg' HACMPresource
+epprd_rg:cl_activate_nfs[289] sed -n $'s/^[ \t]*value = "\\(.*\\)"/\\1/p'
+epprd_rg:cl_activate_nfs[288] METHOD=sequential
+epprd_rg:cl_activate_nfs[291] odmget -q name='EXPORT_FILESYSTEM AND group=epprd_rg' HACMPresource
+epprd_rg:cl_activate_nfs[291] sed -n $'s/^[ \t]*value = "\\(.*\\)"/\\1/p'
+epprd_rg:cl_activate_nfs[291] EXPORT_FILESYSTEM=/board_org
+epprd_rg:cl_activate_nfs[293] odmget -q name='EXPORT_FILESYSTEM_V4 AND group=epprd_rg' HACMPresource
+epprd_rg:cl_activate_nfs[293] sed -n $'s/^[ \t]*value = "\\(.*\\)"/\\1/p'
+epprd_rg:cl_activate_nfs[293] EXPORT_FILESYSTEM_V4=''
+epprd_rg:cl_activate_nfs[302] EXPFILE=/usr/es/sbin/cluster/etc/exports
+epprd_rg:cl_activate_nfs[304] [[ -z '' ]]
+epprd_rg:cl_activate_nfs[305] [[ -r /usr/es/sbin/cluster/etc/exports ]]
+epprd_rg:cl_activate_nfs[311] VERSION_SOURCE=DEFAULT
+epprd_rg:cl_activate_nfs[320] [[ DEFAULT == FILES ]]
+epprd_rg:cl_activate_nfs[377] [[ -x /usr/sbin/nfsrgyd ]]
+epprd_rg:cl_activate_nfs[378] [[ -n '' ]]
+epprd_rg:cl_activate_nfs[379] grep -q vers=4 /etc/filesystems
+epprd_rg:cl_activate_nfs[394] [[ TRUE == TRUE ]]
+epprd_rg:cl_activate_nfs[411] filesystem=/board_org
+epprd_rg:cl_activate_nfs[412] mountpoint=/board
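Editor's note: each cross-mount entry encodes 'local-mountpoint;server-filesystem' in one token, which is why '/board;/board_org' splits into mountpoint=/board and filesystem=/board_org. With ksh parameter expansion (a sketch; the real script's parsing may differ):

    entry='/board;/board_org'
    mountpoint=${entry%%;*}     # /board      - where to mount locally
    filesystem=${entry#*;}      # /board_org  - what the server exports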
+epprd_rg:cl_activate_nfs:/board;/board_org[429] PS4_LOOP='/board;/board_org'
+epprd_rg:cl_activate_nfs:/board;/board_org[430] [[ sequential == sequential ]]
+epprd_rg:cl_activate_nfs:/board;/board_org[432] nfs_mount 1 epprd /board_org /board
+epprd_rg:cl_activate_nfs(0.081):/board;/board_org[nfs_mount:99] (( 4 != 4 ))
+epprd_rg:cl_activate_nfs(0.081):/board;/board_org[nfs_mount:108] LIMIT=1
+epprd_rg:cl_activate_nfs(0.081):/board;/board_org[nfs_mount:108] typeset -li LIMIT
+epprd_rg:cl_activate_nfs(0.081):/board;/board_org[nfs_mount:109] HOST=epprd
+epprd_rg:cl_activate_nfs(0.081):/board;/board_org[nfs_mount:110] FileSystem=/board_org
+epprd_rg:cl_activate_nfs(0.081):/board;/board_org[nfs_mount:111] MountPoint=/board
+epprd_rg:cl_activate_nfs(0.082):/board;/board_org[nfs_mount:116] mount
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ mounted == /board ]]
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ --------------- == /board ]]
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ procfs == /board ]]
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ ahafs == /board ]]
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ /sapcd == /board ]]
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.084):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:119] [[ jfs2 == /board ]]
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:117] read node node_fs lcl_mount rest
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:126] vers=''
+epprd_rg:cl_activate_nfs(0.085):/board;/board_org[nfs_mount:127] [[ DEFAULT == ODM ]]
+epprd_rg:cl_activate_nfs(0.086):/board;/board_org[nfs_mount:141] lsfs -c -v nfs
+epprd_rg:cl_activate_nfs(0.089):/board;/board_org[nfs_mount:141] grep ^/board:
+epprd_rg:cl_activate_nfs(0.091):/board;/board_org[nfs_mount:141] cut -d: -f7
+epprd_rg:cl_activate_nfs(0.094):/board;/board_org[nfs_mount:141] OPTIONS=bg,soft,intr,sec=sys,rw
+epprd_rg:cl_activate_nfs(0.095):/board;/board_org[nfs_mount:142] echo bg,soft,intr,sec=sys,rw
+epprd_rg:cl_activate_nfs(0.097):/board;/board_org[nfs_mount:142] sed s/+/:/g
+epprd_rg:cl_activate_nfs(0.100):/board;/board_org[nfs_mount:142] OPTIONS=bg,soft,intr,sec=sys,rw
+epprd_rg:cl_activate_nfs(0.100):/board;/board_org[nfs_mount:144] [[ -z bg,soft,intr,sec=sys,rw ]]
+epprd_rg:cl_activate_nfs(0.100):/board;/board_org[nfs_mount:152] print bg,soft,intr,sec=sys,rw
+epprd_rg:cl_activate_nfs(0.101):/board;/board_org[nfs_mount:152] grep -q intr
+epprd_rg:cl_activate_nfs(0.104):/board;/board_org[nfs_mount:168] [[ -n '' ]]
+epprd_rg:cl_activate_nfs(0.104):/board;/board_org[nfs_mount:175] [[ sequential == sequential ]]
+epprd_rg:cl_activate_nfs(0.105):/board;/board_org[nfs_mount:177] print bg,soft,intr,sec=sys,rw
+epprd_rg:cl_activate_nfs(0.107):/board;/board_org[nfs_mount:177] sed s/bg/fg/g
+epprd_rg:cl_activate_nfs(0.109):/board;/board_org[nfs_mount:177] OPTIONS=fg,soft,intr,sec=sys,rw
+epprd_rg:cl_activate_nfs(0.110):/board;/board_org[nfs_mount:178] let LIMIT+=4
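Editor's note: for sequential recovery the mount must complete in the foreground, so any 'bg' option is rewritten to 'fg' and the retry budget is raised to compensate:

    OPTIONS=$(print bg,soft,intr,sec=sys,rw | sed s/bg/fg/g)   # -> fg,soft,intr,sec=sys,rw
    let LIMIT+=4                                               # LIMIT was 1, so 5 attempts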
+epprd_rg:cl_activate_nfs(0.110):/board;/board_org[nfs_mount:184] typeset RC
+epprd_rg:cl_activate_nfs(0.110):/board;/board_org[nfs_mount:186] amlog_trace '' 'Activating NFS|/board_org'
+epprd_rg:cl_activate_nfs:/board;/board_org[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_nfs:/board;/board_org[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_nfs:/board;/board_org[amlog_trace:319] cltime
+epprd_rg:cl_activate_nfs:/board;/board_org[amlog_trace:319] DATE=2023-01-28T17:10:54.788796
+epprd_rg:cl_activate_nfs:/board;/board_org[amlog_trace:320] echo '|2023-01-28T17:10:54.788796|INFO: Activating NFS|/board_org'
+epprd_rg:cl_activate_nfs:/board;/board_org[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_nfs(0.138):/board;/board_org[nfs_mount:187] (( TRIES=0))
+epprd_rg:cl_activate_nfs(0.138):/board;/board_org[nfs_mount:187] (( 0<5))
+epprd_rg:cl_activate_nfs:/board;/board_org[nfs_mount] amlog_trace '' 'Activating NFS|/board_org'
+epprd_rg:cl_activate_nfs:/board;/board_org[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_nfs:/board;/board_org[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_nfs:/board;/board_org[amlog_trace:319] cltime
+epprd_rg:cl_activate_nfs:/board;/board_org[amlog_trace:319] DATE=2023-01-28T17:10:54.861904
+epprd_rg:cl_activate_nfs:/board;/board_org[amlog_trace:320] echo '|2023-01-28T17:10:54.861904|INFO: Activating NFS|/board_org'
+epprd_rg:cl_activate_nfs:/board;/board_org[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_nfs(0.211):/board;/board_org[nfs_mount:203] return 0
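Editor's note: the trace lines for the mount attempt itself arrived garbled above; from LIMIT=5, the options already set, and the return 0, the body of nfs_mount presumably contains a loop of the following shape (a sketch only; the exact mount arguments, pause, and failure handling are assumptions):

    (( TRIES=0 ))
    while (( TRIES < LIMIT ))
    do
        mount -o $OPTIONS $HOST:$FileSystem $MountPoint && return 0
        (( TRIES++ ))
        sleep 1        # assumed pause between attempts
    done
    return 1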
+epprd_rg:process_resources(20.515)[mount_nfs_filesystems:1540] RC=0
+epprd_rg:process_resources(20.515)[mount_nfs_filesystems:1541] (( 0 != 0 && 0 != 11 ))
+epprd_rg:process_resources(20.515)[mount_nfs_filesystems:1549] (( 0 != 0 ))
+epprd_rg:process_resources(20.515)[mount_nfs_filesystems:1565] return 0
+epprd_rg:process_resources(20.515)[3324] true
+epprd_rg:process_resources(20.515)[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources(20.515)[3328] set -a
+epprd_rg:process_resources(20.516)[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T17:10:54.874956 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources(20.528)[3329] eval JOB_TYPE=NONE
+epprd_rg:process_resources(20.528)[1] JOB_TYPE=NONE
+epprd_rg:process_resources(20.528)[3330] RC=0
+epprd_rg:process_resources(20.528)[3331] set +a
+epprd_rg:process_resources(20.528)[3333] (( 0 != 0 ))
+epprd_rg:process_resources(20.528)[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources(20.528)[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources(20.528)[3343] export GROUPNAME
+epprd_rg:process_resources(20.528)[3353] IS_SERVICE_START=1
+epprd_rg:process_resources(20.528)[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources(20.528)[3360] [[ NONE == RELEASE ]]
+epprd_rg:process_resources(20.528)[3360] [[ NONE == ONLINE ]]
+epprd_rg:process_resources(20.528)[3729] break
+epprd_rg:process_resources(20.528)[3740] : If sddsrv was turned off above, turn it back on again
+epprd_rg:process_resources(20.529)[3742] [[ FALSE == TRUE ]]
+epprd_rg:process_resources(20.529)[3747] exit 0
:rg_move[247] : unsetting AM_SYNC_CALLED_BY from the caller's environment as
:rg_move[248] : we don't require it after this point in execution.
:rg_move[250] unset AM_SYNC_CALLED_BY
:rg_move[253] [[ -f /tmp/.NFSSTOPPED ]]
:rg_move[274] [[ -f /tmp/.RPCLOCKDSTOPPED ]]
:rg_move[276] rm -f /tmp/.RPCLOCKDSTOPPED
:rg_move[277] ATTEMPT=0
:rg_move[277] typeset -li ATTEMPT
:rg_move[278] (( ATTEMPT++ < 60 ))
:rg_move[280] : rpc.lockd status check
:rg_move[281] lssrc -s rpc.lockd
:rg_move[281] LC_ALL=C
:rg_move[281] grep stopping
:rg_move[282] (( 1 == 0 ))
:rg_move[282] break
:rg_move[285] startsrc -s rpc.lockd
0513-059 The rpc.lockd Subsystem has been started. Subsystem PID is 26804666.
:rg_move[286] rcstartsrc=0
:rg_move[287] (( 0 != 0 ))
:rg_move[293] exit 0
Jan 28 2023 17:10:54 EVENT COMPLETED: rg_move epprda 1 ACQUIRE 0
|2023-01-28T17:10:54|28698|EVENT COMPLETED: rg_move epprda 1 ACQUIRE 0|
:clevlog[amlog_trace:318] clcycle clavailability.log
:clevlog[amlog_trace:318] 1> /dev/null 2>& 1
:clevlog[amlog_trace:319] cltime
:clevlog[amlog_trace:319] DATE=2023-01-28T17:10:54.991446
:clevlog[amlog_trace:320] echo '|2023-01-28T17:10:54.991446|INFO: rg_move|epprd_rg|epprda|1|ACQUIRE|0'
:clevlog[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
:rg_move_acquire[+119] exit_status=0
:rg_move_acquire[+120] : exit status of clcallev rg_move epprda 1 ACQUIRE is: 0
:rg_move_acquire[+121] exit 0
Jan 28 2023 17:10:55 EVENT COMPLETED: rg_move_acquire epprda 1 0
|2023-01-28T17:10:55|28698|EVENT COMPLETED: rg_move_acquire epprda 1 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:10:55.116376
+ echo '|2023-01-28T17:10:55.116376|INFO: rg_move_acquire|epprd_rg|epprda|1|0'
+ 1>> /var/hacmp/availability/clavailability.log
Jan 28 2023 17:10:55 EVENT START: rg_move_complete epprda 1
|2023-01-28T17:10:55|28698|EVENT START: rg_move_complete epprda 1|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:10:55.310306
+ echo '|2023-01-28T17:10:55.310306|INFO: rg_move_complete|epprd_rg|epprda|1'
+ 1>> /var/hacmp/availability/clavailability.log
:get_local_nodename[48] version=1.2.1.28
:get_local_nodename[52] : cllsclstr -N will return the local node if not configured in HACMPcluster
:get_local_nodename[54] ODMDIR=/etc/es/objrepos
:get_local_nodename[54] export ODMDIR
:get_local_nodename[55] nodename=''
:get_local_nodename[55] typeset nodename
:get_local_nodename[56] cllsclstr -N
:get_local_nodename[56] nodename=epprda
:get_local_nodename[57] rc=0
:get_local_nodename[57] typeset -i rc
:get_local_nodename[58] (( 0 != 0 ))
:get_local_nodename[61] : If the node name in HACMPcluster matches a configured node, we are done.
:get_local_nodename[63] clnodename
:get_local_nodename[63] grep -w epprda
:get_local_nodename[63] [[ -n epprda ]]
:get_local_nodename[65] print -- epprda
:get_local_nodename[66] exit 0
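Editor's note: get_local_nodename reduces to asking the cluster for the node name and confirming the answer against the configured node list. Condensed from the trace:

    nodename=$(cllsclstr -N)
    rc=$?
    (( rc != 0 )) && exit $rc                      # no cluster defined
    if clnodename | grep -w $nodename > /dev/null  # name matches a configured node
    then
        print -- $nodename
        exit 0
    fi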
:rg_move_complete[91] version=%I%
:rg_move_complete[97] STATUS=0
:rg_move_complete[97] typeset -li STATUS
:rg_move_complete[99] [[ -z '' ]]
:rg_move_complete[101] EMULATE=REAL
:rg_move_complete[104] set -u
:rg_move_complete[106] (( 2 < 2 || 2 > 3 ))
:rg_move_complete[112] NODENAME=epprda
:rg_move_complete[112] export NODENAME
:rg_move_complete[113] RGID=1
:rg_move_complete[114] (( 2 == 3 ))
:rg_move_complete[118] RGDESTINATION=''
:rg_move_complete[122] : serial number for this event is 28698
:rg_move_complete[126] : Interpret resource group ID into a resource group name.
:rg_move_complete[128] clodmget -qid=1 -f group -n HACMPgroup
:rg_move_complete[128] eval RGNAME=epprd_rg
:rg_move_complete[1] RGNAME=epprd_rg
+epprd_rg:rg_move_complete[129] GROUPNAME=epprd_rg
+epprd_rg:rg_move_complete[131] UPDATESTATD=0
+epprd_rg:rg_move_complete[131] typeset -li UPDATESTATD
+epprd_rg:rg_move_complete[132] NFSSTOPPED=0
+epprd_rg:rg_move_complete[132] typeset -li NFSSTOPPED
+epprd_rg:rg_move_complete[133] LIMIT=60
+epprd_rg:rg_move_complete[133] WAIT=1
+epprd_rg:rg_move_complete[133] TRY=0
+epprd_rg:rg_move_complete[133] typeset -li LIMIT WAIT TRY
+epprd_rg:rg_move_complete[136] : If this is a two node cluster and exported filesystems exist, then
+epprd_rg:rg_move_complete[137] : when the cluster topology is stable notify rpc.statd of the changes.
+epprd_rg:rg_move_complete[139] clnodename
+epprd_rg:rg_move_complete[139] wc -l
+epprd_rg:rg_move_complete[139] (( 2 == 2 ))
+epprd_rg:rg_move_complete[141] clodmget -f group -n HACMPgroup
+epprd_rg:rg_move_complete[141] RESOURCE_GROUPS=epprd_rg
+epprd_rg:rg_move_complete[144] clodmget -q group='epprd_rg AND name=EXPORT_FILESYSTEM' -f value -n HACMPresource
+epprd_rg:rg_move_complete[144] EXPORTLIST=/board_org
+epprd_rg:rg_move_complete[146] [[ -n /board_org ]]
+epprd_rg:rg_move_complete[146] [[ epprd_rg == epprd_rg ]]
+epprd_rg:rg_move_complete[148] UPDATESTATD=1
+epprd_rg:rg_move_complete[149] [[ REAL == EMUL ]]
+epprd_rg:rg_move_complete[154] cl_update_statd
:cl_update_statd(0)[+174] version=%I%
:cl_update_statd(0)[+176] typeset -i RC=0
:cl_update_statd(0)[+178] LOCAL_FOUND=
:cl_update_statd(0)[+179] TWIN_NAME=
:cl_update_statd(0)[+180] [[ -z epprda ]]
:cl_update_statd(0)[+181] :cl_update_statd(0)[+181] cl_get_path -S
OP_SEP=~
:cl_update_statd(0)[+182] set -u
:cl_update_statd(0)[+187] LOCAL_FOUND=true
:cl_update_statd(0)[+194] : Make sure statd is running locally
:cl_update_statd(0)[+196] lssrc -s statd
:cl_update_statd(0)[+196] LC_ALL=C
:cl_update_statd(0)[+196] grep -qw inoperative
:cl_update_statd(0)[+198] cl_msg -e 0 -m 10744 %1$s[%2$d]: statd is not up on the local node \n cl_update_statd 198
:cl_msg[58] version=1.3.1.1
:cl_msg[68] getopts e:s:c:m: opt
:cl_msg[74] : Error or standard message. 0 is standard
:cl_msg[75] MSG_TYPE=0
:cl_msg[68] getopts e:s:c:m: opt
:cl_msg[78] : Message ID in given set and catalog
:cl_msg[79] MSG_ID=10744
:cl_msg[68] getopts e:s:c:m: opt
:cl_msg[87] shift 4
:cl_msg[89] : All the rest is the default message and data - '%1$s[%2$d]: statd is not up on the local node \n' cl_update_statd 198
:cl_msg[92] [[ -z '' ]]
:cl_msg[94] MSG_CAT=scripts.cat
:cl_msg[97] [[ -z 0 ]]
:cl_msg[102] [[ -z 10744 ]]
:cl_msg[107] SYSLOG_CONF=''
:cl_msg[107] typeset SYSLOG_CONF
:cl_msg[108] clgetsyslog
:cl_msg[108] SYSLOG_CONF=/etc/syslog.conf
:cl_msg[110] (( 0 != 0 ))
:cl_msg[115] : Look up the message in the catalog
:cl_msg[117] dspmsg scripts.cat 10744 '%1$s[%2$d]: statd is not up on the local node \n' cl_update_statd 198
:cl_msg[117] 2>& 1
:cl_msg[117] MSG='cl_update_statd[198]: statd is not up on the local node '
:cl_msg[120] : This is where we print out the parts of the message when we have
:cl_msg[121] : an error. We also write to the syslog if it is configured.
:cl_msg[123] (( 0 != 0 ))
:cl_msg[152] print -u2 Jan 28 2023 17:10:55 'cl_update_statd[198]:' statd is not up on the local node
Jan 28 2023 17:10:55 cl_update_statd[198]: statd is not up on the local node
:cl_msg[155] : Finally, synchronize the syslog file but only if syslog is configured and
:cl_msg[156] : the file exists.
:cl_msg[158] [[ -n '' ]]
:cl_msg[163] exit
:cl_update_statd(0)[+200] : Attempt to recover this situation by restarting statd
:cl_update_statd(0)[+202] startsrc -s rpc.statd
0513-059 The rpc.statd Subsystem has been started. Subsystem PID is 7864678.
:cl_update_statd(0)[+203] sleep 5
:cl_update_statd(5)[+207] : Get the current twin, if there is one
:cl_update_statd(5)[+209] :cl_update_statd(5)[+209] nfso -H sm_gethost
:cl_update_statd(5)[+209] 2>& 1
CURTWIN=
:cl_update_statd(5)[+210] RC=0
:cl_update_statd(5)[+212] [[ -z true ]]
:cl_update_statd(5)[+212] [[ -z ]]
:cl_update_statd(5)[+215] : Local node is no longer a cluster member, unregister its twin
:cl_update_statd(5)[+215] [[ -n ]]
:cl_update_statd(5)[+259] : RC is actually 0
:cl_update_statd(5)[+266] return 0
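Editor's note: cl_update_statd's recovery path is fully visible above: if statd is inoperative it is restarted, and the registered twin is then queried through the nfso helper. Condensed:

    if LC_ALL=C lssrc -s statd | grep -qw inoperative
    then
        startsrc -s rpc.statd
        sleep 5                            # let statd register with SRC
    fi
    CURTWIN=$(nfso -H sm_gethost 2>&1)     # empty output: no twin registered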
+epprd_rg:rg_move_complete[155] (( 0 != 0 ))
+epprd_rg:rg_move_complete[160] break
+epprd_rg:rg_move_complete[166] : Set the RESOURCE_GROUPS environment variable with the names
+epprd_rg:rg_move_complete[167] : of all resource groups participating in this event, and export
+epprd_rg:rg_move_complete[168] : them to all successive scripts.
+epprd_rg:rg_move_complete[170] set -a
+epprd_rg:rg_move_complete[171] clsetenvgrp epprda rg_move_complete epprd_rg
:clsetenvgrp[+49] [[ high = high ]]
:clsetenvgrp[+49] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clsetenvgrp.sh 1$
:clsetenvgrp[+51] usingVer=clSetenvgrp
:clsetenvgrp[+56] clSetenvgrp epprda rg_move_complete epprd_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+57] exit 0
+epprd_rg:rg_move_complete[171] clsetenvgrp_output=FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
+epprd_rg:rg_move_complete[172] RC=0
+epprd_rg:rg_move_complete[173] eval FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
+epprd_rg:rg_move_complete[1] FORCEDOWN_GROUPS=''
+epprd_rg:rg_move_complete[2] RESOURCE_GROUPS=''
+epprd_rg:rg_move_complete[3] HOMELESS_GROUPS=''
+epprd_rg:rg_move_complete[4] HOMELESS_FOLLOWER_GROUPS=''
+epprd_rg:rg_move_complete[5] ERRSTATE_GROUPS=''
+epprd_rg:rg_move_complete[6] PRINCIPAL_ACTIONS=''
+epprd_rg:rg_move_complete[7] ASSOCIATE_ACTIONS=''
+epprd_rg:rg_move_complete[8] AUXILLIARY_ACTIONS=''
+epprd_rg:rg_move_complete[8] SIBLING_GROUPS=''
+epprd_rg:rg_move_complete[9] SIBLING_NODES_BY_GROUP=''
+epprd_rg:rg_move_complete[10] SIBLING_ACQUIRING_GROUPS=''
+epprd_rg:rg_move_complete[11] SIBLING_ACQUIRING_NODES_BY_GROUP=''
+epprd_rg:rg_move_complete[12] SIBLING_RELEASING_GROUPS=''
+epprd_rg:rg_move_complete[13] SIBLING_RELEASING_NODES_BY_GROUP=''
+epprd_rg:rg_move_complete[174] set +a
+epprd_rg:rg_move_complete[175] (( 0 != 0 ))
+epprd_rg:rg_move_complete[182] : For each participating resource group, serially process the resources.
+epprd_rg:rg_move_complete[251] (( 1 == 1 ))
+epprd_rg:rg_move_complete[253] [[ REAL == EMUL ]]
+epprd_rg:rg_move_complete[259] stopsrc -s rpc.lockd
0513-044 The rpc.lockd Subsystem was requested to stop.
+epprd_rg:rg_move_complete[260] rcstopsrc=0
+epprd_rg:rg_move_complete[261] (( 0 != 0 ))
+epprd_rg:rg_move_complete[266] (( TRY=0))
+epprd_rg:rg_move_complete[266] (( 0<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 1<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 2<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 3<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 4<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 5<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 6<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 7<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 8<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 9<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 10<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 11<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 12<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 13<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 14<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 15<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 16<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 17<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 18<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 19<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 20<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 21<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 22<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 23<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 24<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 25<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z '' ]]
+epprd_rg:rg_move_complete[273] break
+epprd_rg:rg_move_complete[277] [[ ! -z '' ]]
+epprd_rg:rg_move_complete[300] : Make sure that rpc.lockd has stopped, then restart it.
+epprd_rg:rg_move_complete[302] startsrc -s rpc.lockd
0513-059 The rpc.lockd Subsystem has been started. Subsystem PID is 26542484.
+epprd_rg:rg_move_complete[303] rcstartsrc=0
+epprd_rg:rg_move_complete[304] (( 0 != 0 ))
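Editor's note: the long run of repeated lines above is a single pattern: bounce rpc.lockd so NFS lock state is rebuilt, polling SRC for up to 60 seconds until the subsystem has actually left the 'stopping' state before restarting it. Condensed:

    stopsrc -s rpc.lockd
    (( TRY = 0 ))
    while (( TRY++ < 60 ))
    do
        LC_ALL=C lssrc -s rpc.lockd | tail -1 | read name subsystem pid state
        [[ -z $state ]] && break   # 4th field is empty once the PID is gone
        sleep 1
    done
    startsrc -s rpc.lockd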
+epprd_rg:rg_move_complete[365] : If the resource group in this rg_move is now homeless,
+epprd_rg:rg_move_complete[366] : then we need to put it into an error state.
+epprd_rg:rg_move_complete[368] active_node=0
+epprd_rg:rg_move_complete[428] : If the resource group in this rg_move is now homeless_secondary,
+epprd_rg:rg_move_complete[429] : then we need to put it into an errorsecondary state.
+epprd_rg:rg_move_complete[437] : Set an error state for concurrent groups that have
+epprd_rg:rg_move_complete[438] : been brought offline on this node by rg_move.
+epprd_rg:rg_move_complete[453] AM_SYNC_CALLED_BY=RG_MOVE_COMPLETE
+epprd_rg:rg_move_complete[453] export AM_SYNC_CALLED_BY
+epprd_rg:rg_move_complete[454] process_resources
:process_resources[3318] version=1.169
:process_resources[3321] STATUS=0
:process_resources[3322] sddsrv_off=FALSE
:process_resources[3324] true
:process_resources[3326] : call rgpa, and it will tell us what to do next
:process_resources[3328] set -a
:process_resources[3329] clRGPA
:clRGPA[+47] [[ high = high ]]
:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
:clRGPA[+49] usingVer=clrgpa
:clRGPA[+54] clrgpa
2023-01-28T17:11:25.661963 clrgpa
:clRGPA[+55] exit 0
:process_resources[3329] eval JOB_TYPE=SYNC_VGS ACTION=ACQUIRE VOLUME_GROUPS='"datavg"' RESOURCE_GROUPS='"epprd_rg' '"'
:process_resources[1] JOB_TYPE=SYNC_VGS
:process_resources[1] ACTION=ACQUIRE
:process_resources[1] VOLUME_GROUPS=datavg
:process_resources[1] RESOURCE_GROUPS='epprd_rg '
:process_resources[3330] RC=0
:process_resources[3331] set +a
:process_resources[3333] (( 0 != 0 ))
:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ SYNC_VGS == RELEASE ]]
+epprd_rg:process_resources[3360] [[ SYNC_VGS == ONLINE ]]
+epprd_rg:process_resources[3474] [[ ACQUIRE == ACQUIRE ]]
+epprd_rg:process_resources[3476] sync_volume_groups
+epprd_rg:process_resources[sync_volume_groups:2699] PS4_FUNC=sync_volume_groups
+epprd_rg:process_resources[sync_volume_groups:2699] typeset PS4_FUNC
+epprd_rg:process_resources[sync_volume_groups:2700] [[ high == high ]]
+epprd_rg:process_resources[sync_volume_groups:2700] set -x
+epprd_rg:process_resources[sync_volume_groups:2701] STAT=0
+epprd_rg:process_resources[sync_volume_groups:2704] export GROUPNAME
+epprd_rg:process_resources[sync_volume_groups:2706] get_list_head datavg
+epprd_rg:process_resources[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources[get_list_head:60] set -x
+epprd_rg:process_resources[get_list_head:61] echo datavg
+epprd_rg:process_resources[get_list_head:61] read listhead listtail
+epprd_rg:process_resources[get_list_head:61] IFS=:
+epprd_rg:process_resources[get_list_head:62] echo datavg
+epprd_rg:process_resources[get_list_head:62] tr , ' '
+epprd_rg:process_resources[sync_volume_groups:2706] read LIST_OF_VOLUME_GROUPS_FOR_RG
+epprd_rg:process_resources[sync_volume_groups:2707] get_list_tail datavg
+epprd_rg:process_resources[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources[get_list_tail:68] set -x
+epprd_rg:process_resources[get_list_tail:69] echo datavg
+epprd_rg:process_resources[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources[get_list_tail:69] IFS=:
+epprd_rg:process_resources[get_list_tail:70] echo
+epprd_rg:process_resources[sync_volume_groups:2707] read VOLUME_GROUPS
+epprd_rg:process_resources[sync_volume_groups:2710] : Sync the active volume groups
+epprd_rg:process_resources[sync_volume_groups:2712] lsvg -L -o
+epprd_rg:process_resources[sync_volume_groups:2712] sort
+epprd_rg:process_resources[sync_volume_groups:2712] 2> /tmp/lsvg.err
+epprd_rg:process_resources[sync_volume_groups:2712] 1> /tmp/lsvg.out.26804672
+epprd_rg:process_resources[sync_volume_groups:2713] echo datavg
+epprd_rg:process_resources[sync_volume_groups:2713] tr ' ' '\n'
+epprd_rg:process_resources[sync_volume_groups:2714] sort
+epprd_rg:process_resources[sync_volume_groups:2714] comm -12 /tmp/lsvg.out.26804672 -
+epprd_rg:process_resources[sync_volume_groups:2716] cl_sync_vgs datavg
+epprd_rg:process_resources[sync_volume_groups:2718] [[ -s /tmp/lsvg.err ]]
+epprd_rg:process_resources[sync_volume_groups:2723] rm -f /tmp/lsvg.out.26804672 /tmp/lsvg.err
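The sync_volume_groups step above computes the set of volume groups that are both active on this node and configured for the resource group, using sorted temp files and comm. A hedged sketch of that intersection (the file names and datavg value come from the trace; the wrapper is a reconstruction):

    lsvg -L -o 2> /tmp/lsvg.err | sort > /tmp/lsvg.out.$$      # VGs currently varied on
    echo "$LIST_OF_VOLUME_GROUPS_FOR_RG" | tr ' ' '\n' | sort |
        comm -12 /tmp/lsvg.out.$$ -                            # keep only VGs in both lists
    rm -f /tmp/lsvg.out.$$ /tmp/lsvg.err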
+epprd_rg:cl_sync_vgs[303] version=1.24.1.4
+epprd_rg:cl_sync_vgs[306] (( 1 == 0 ))
+epprd_rg:cl_sync_vgs[312] : syncing 4 stale PPs at a time seems to be a win most of the time, but
+epprd_rg:cl_sync_vgs[313] : we honor the NUM_PARALLEL_LPS value from /etc/environment, as does
+epprd_rg:cl_sync_vgs[314] : syncvg.
+epprd_rg:cl_sync_vgs[316] syncflag=''
+epprd_rg:cl_sync_vgs[316] export syncflag
+epprd_rg:cl_sync_vgs[317] PS4_LOOP=''
+epprd_rg:cl_sync_vgs[317] export PS4_LOOP
+epprd_rg:cl_sync_vgs[318] typeset -i npl
+epprd_rg:cl_sync_vgs[319] grep -q ^NUM_PARALLEL_LPS= /etc/environment
+epprd_rg:process_resources[sync_volume_groups:2732] unset AM_SYNC_CALLED_BY
+epprd_rg:process_resources[sync_volume_groups:2734] return 0
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:cl_sync_vgs[321] syncflag=-P4
+epprd_rg:cl_sync_vgs[328] echo 'NOTE: While the sync is going on, the volume group can be used'
NOTE: While the sync is going on, the volume group can be used
+epprd_rg:cl_sync_vgs[331] : For GLVM volume groups, read the PARALLEL LPS count from HACMPresource if it was set from the GUI;
+epprd_rg:cl_sync_vgs[332] : otherwise read it from the environment variables, and if it is not set there, use 32 as the default value.
+epprd_rg:cl_sync_vgs[334] clodmget -q name='GMVG_REP_RESOURCE and value=datavg' -f group HACMPresource
+epprd_rg:cl_sync_vgs[334] 2> /dev/null
+epprd_rg:cl_sync_vgs[334] glvm_rg=''
+epprd_rg:cl_sync_vgs[335] [[ -n '' ]]
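A sketch of how the parallel-sync flag appears to be chosen (the -P4 default and the NUM_PARALLEL_LPS lookup are visible in the trace; the exact fallback chain is an assumption based on the script comments above):

    typeset -i npl
    if grep -q '^NUM_PARALLEL_LPS=' /etc/environment; then
        npl=$(grep '^NUM_PARALLEL_LPS=' /etc/environment | cut -d= -f2)
        syncflag="-P$npl"            # honor the administrator's setting
    else
        syncflag="-P4"               # default: sync 4 stale PPs at a time
    fi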
+epprd_rg:cl_sync_vgs[353] check_sync datavg
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:76] typeset vg_name
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:77] typeset vgid
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:78] typeset disklist
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:79] typeset lv_name
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:80] typeset -li stale_count
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:81] typeset -li mode
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:82] RC=0
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:82] typeset -li RC
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:83] typeset site_node_list
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:84] typeset site_choice
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:86] vg_name=datavg
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:87] disklist=''
+epprd_rg:cl_sync_vgs(0.022):datavg[check_sync:89] getlvodm -v datavg
+epprd_rg:cl_sync_vgs(0.027):datavg[check_sync:89] vgid=00c44af100004b00000001851e9dc053
+epprd_rg:cl_sync_vgs(0.027):datavg[check_sync:92] : find disks in the VG that LVM thinks are inaccessible
+epprd_rg:cl_sync_vgs(0.027):datavg[check_sync:94] lsvg -L -p datavg
+epprd_rg:cl_sync_vgs(0.027):datavg[check_sync:94] LC_ALL=C
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T17:11:25.749060 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=APPLICATIONS ACTION=ACQUIRE ALL_APPLICATIONS='"epprd_app"' RESOURCE_GROUPS='"epprd_rg' '"' MISCDATA='""'
+epprd_rg:process_resources[1] JOB_TYPE=APPLICATIONS
+epprd_rg:process_resources[1] ACTION=ACQUIRE
+epprd_rg:process_resources[1] ALL_APPLICATIONS=epprd_app
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[1] MISCDATA=''
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ APPLICATIONS == RELEASE ]]
+epprd_rg:process_resources[3360] [[ APPLICATIONS == ONLINE ]]
+epprd_rg:process_resources[3549] process_applications ACQUIRE
+epprd_rg:process_resources[process_applications:312] PS4_FUNC=process_applications
+epprd_rg:process_resources[process_applications:312] typeset PS4_FUNC
+epprd_rg:process_resources[process_applications:313] [[ high == high ]]
+epprd_rg:process_resources[process_applications:313] set -x
+epprd_rg:process_resources[process_applications:316] : Each subprocess will log to a file with this name and PID
+epprd_rg:process_resources[process_applications:318] TMP_FILE=/var/hacmp/log/.process_resources_applications.26804672
+epprd_rg:process_resources[process_applications:318] export TMP_FILE
+epprd_rg:process_resources[process_applications:320] rm -f '/var/hacmp/log/.process_resources_applications*'
+epprd_rg:process_resources[process_applications:322] WAITPIDS=''
+epprd_rg:process_resources[process_applications:323] LPAR_ACQUIRE_FAILED=0
+epprd_rg:process_resources[process_applications:324] LPAR_RELEASE_FAILED=0
+epprd_rg:process_resources[process_applications:325] START_STOP_FAILED=0
+epprd_rg:process_resources[process_applications:326] LIST_OF_APPS=epprd_app
+epprd_rg:process_resources[process_applications:329] : Acquire lpar resources in one-shot before starting applications
+epprd_rg:process_resources[process_applications:331] [[ ACQUIRE == ACQUIRE ]]
+epprd_rg:process_resources[process_applications:333] GROUPNAME=epprd_rg
+epprd_rg:process_resources[process_applications:333] export GROUPNAME
+epprd_rg:process_resources[process_applications:334] clmanageroha -o acquire -s -l epprd_app
+epprd_rg:process_resources[process_applications:334] 3>& 2
+epprd_rg:clmanageroha[318] : version='@(#)' 5881272 43haes/usr/sbin/cluster/events/clmanageroha.sh, 61aha_r726, 2205A_aha726, May 16 2022 12:15 PM
+epprd_rg:clmanageroha[321] clodmget -n -f connection_type HACMPhmcparam
+epprd_rg:clmanageroha[321] CONN_TYPE=0
+epprd_rg:clmanageroha[321] typeset -i CONN_TYPE
+epprd_rg:clmanageroha[323] clodmget -q name='epprda and object like POWERVS_*' -nf name HACMPnode
+epprd_rg:clmanageroha[323] 2> /dev/null
+epprd_rg:clmanageroha[323] [[ -n '' ]]
+epprd_rg:clmanageroha[326] export CONN_TYPE
+epprd_rg:clmanageroha[331] roha_session_open -o acquire -s -l epprd_app
+epprd_rg:clmanageroha[roha_session_open:131] roha_session.id=27001166
+epprd_rg:clmanageroha[roha_session_open:132] date
+epprd_rg:clmanageroha[roha_session_open:132] LC_ALL=C
+epprd_rg:clmanageroha[roha_session_open:132] roha_session_log 'Open session 27001166 at Sat Jan 28 17:11:25 KORST 2023'
[ROHALOG:27001166:(0.071)] Open session 27001166 at Sat Jan 28 17:11:25 KORST 2023
+epprd_rg:clmanageroha[roha_session_open:137] getopts :cso:l:t opt
+epprd_rg:clmanageroha[roha_session_open:146] roha_session.operation=acquire
+epprd_rg:clmanageroha[roha_session_open:137] getopts :cso:l:t opt
+epprd_rg:clmanageroha[roha_session_open:143] roha_session.systemmirror_mode=1
+epprd_rg:clmanageroha[roha_session_open:137] getopts :cso:l:t opt
+epprd_rg:clmanageroha[roha_session_open:149] roha_session.optimal_apps=epprd_app
+epprd_rg:clmanageroha[roha_session_open:137] getopts :cso:l:t opt
+epprd_rg:clmanageroha[roha_session_open:163] [[ acquire != @(acquire|release|adjust) ]]
+epprd_rg:clmanageroha[roha_session_open:168] no_roha_apps=0
+epprd_rg:clmanageroha[roha_session_open:168] typeset -i no_roha_apps
+epprd_rg:clmanageroha[roha_session_open:169] need_explicit_res_rel=0
+epprd_rg:clmanageroha[roha_session_open:169] typeset -i need_explicit_res_rel
+epprd_rg:clmanageroha[roha_session_open:187] [[ -n epprd_app ]]
+epprd_rg:clmanageroha[roha_session_open:187] sort
+epprd_rg:clmanageroha[roha_session_open:187] clmgr q roha
+epprd_rg:clmanageroha[roha_session_open:187] uniq -d
+epprd_rg:cl_sync_vgs(0.185):datavg[check_sync:94] disklist=$'datavg:\nPV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION\nhdisk2 active 199 89 40..00..00..09..40\nhdisk3 active 199 89 40..00..00..09..40\nhdisk4 active 199 88 40..00..00..08..40\nhdisk5 active 199 89 40..00..00..09..40\nhdisk6 active 199 89 40..00..00..09..40\nhdisk7 active 199 89 40..00..00..09..40\nhdisk8 active 199 89 40..00..00..09..40'
+epprd_rg:cl_sync_vgs(0.192):datavg[check_sync:95] grep -w missing
+epprd_rg:cl_sync_vgs(0.193):datavg[check_sync:95] print -- $'datavg:\nPV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION\nhdisk2 active 199 89 40..00..00..09..40\nhdisk3 active 199 89 40..00..00..09..40\nhdisk4 active 199 88 40..00..00..08..40\nhdisk5 active 199 89 40..00..00..09..40\nhdisk6 active 199 89 40..00..00..09..40\nhdisk7 active 199 89 40..00..00..09..40\nhdisk8 active 199 89 40..00..00..09..40'
+epprd_rg:cl_sync_vgs(0.202):datavg[check_sync:95] cut -f1 '-d '
+epprd_rg:cl_sync_vgs(0.208):datavg[check_sync:95] missing_disklist=''
+epprd_rg:cl_sync_vgs(0.210):datavg[check_sync:96] print -- $'datavg:\nPV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION\nhdisk2 active 199 89 40..00..00..09..40\nhdisk3 active 199 89 40..00..00..09..40\nhdisk4 active 199 88 40..00..00..08..40\nhdisk5 active 199 89 40..00..00..09..40\nhdisk6 active 199 89 40..00..00..09..40\nhdisk7 active 199 89 40..00..00..09..40\nhdisk8 active 199 89 40..00..00..09..40'
+epprd_rg:cl_sync_vgs(0.215):datavg[check_sync:96] grep -w removed
+epprd_rg:cl_sync_vgs(0.225):datavg[check_sync:96] cut -f1 '-d '
+epprd_rg:cl_sync_vgs(0.237):datavg[check_sync:96] removed_disklist=''
+epprd_rg:cl_sync_vgs(0.237):datavg[check_sync:100] : Proceed if there are disks that LVM thinks are inaccessible
+epprd_rg:cl_sync_vgs(0.237):datavg[check_sync:102] [[ -n '' ]]
+epprd_rg:cl_sync_vgs(0.237):datavg[check_sync:102] [[ -n '' ]]
+epprd_rg:cl_sync_vgs(0.237):datavg[check_sync:196] : sync if any LVs in the VG that have stale partitions
+epprd_rg:cl_sync_vgs(0.237):datavg[check_sync:198] (( 0 == 0 ))
+epprd_rg:cl_sync_vgs(0.237):datavg[check_sync:201] : A status of 2, 3, 5 or 7 indicates the presence of dirty and/or stale partitions
+epprd_rg:cl_sync_vgs(0.237):datavg[check_sync:213] is_start_logged=0
+epprd_rg:cl_sync_vgs(0.237):datavg[check_sync:218] at_least_one_sync_success=0
+epprd_rg:cl_sync_vgs(0.238):datavg[check_sync:219] lqueryvg -g 00c44af100004b00000001851e9dc053 -L
+epprd_rg:cl_sync_vgs(0.241):datavg[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.311):datavg[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.311):datavg.epprdaloglv[check_sync:221] PS4_LOOP=datavg.epprdaloglv
+epprd_rg:cl_sync_vgs(0.311):datavg.epprdaloglv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.311):datavg.epprdaloglv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.311):datavg.epprdaloglv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.311):datavg.epprdaloglv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.311):datavg.epprdaloglv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.311):datavg.saplv[check_sync:221] PS4_LOOP=datavg.saplv
+epprd_rg:cl_sync_vgs(0.311):datavg.saplv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.311):datavg.saplv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.311):datavg.saplv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.311):datavg.saplv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.311):datavg.saplv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.311):datavg.sapmntlv[check_sync:221] PS4_LOOP=datavg.sapmntlv
+epprd_rg:cl_sync_vgs(0.311):datavg.sapmntlv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.311):datavg.sapmntlv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.311):datavg.sapmntlv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.311):datavg.sapmntlv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.311):datavg.sapmntlv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.311):datavg.oraclelv[check_sync:221] PS4_LOOP=datavg.oraclelv
+epprd_rg:cl_sync_vgs(0.311):datavg.oraclelv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.311):datavg.oraclelv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.311):datavg.oraclelv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.311):datavg.oraclelv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.311):datavg.oraclelv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.311):datavg.epplv[check_sync:221] PS4_LOOP=datavg.epplv
+epprd_rg:cl_sync_vgs(0.311):datavg.epplv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.311):datavg.epplv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.311):datavg.epplv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.311):datavg.epplv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.311):datavg.epplv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.311):datavg.oraarchlv[check_sync:221] PS4_LOOP=datavg.oraarchlv
+epprd_rg:cl_sync_vgs(0.311):datavg.oraarchlv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.311):datavg.oraarchlv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.311):datavg.oraarchlv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.311):datavg.oraarchlv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.311):datavg.oraarchlv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.311):datavg.sapdata1lv[check_sync:221] PS4_LOOP=datavg.sapdata1lv
+epprd_rg:cl_sync_vgs(0.311):datavg.sapdata1lv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.311):datavg.sapdata1lv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.311):datavg.sapdata1lv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata1lv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata1lv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata2lv[check_sync:221] PS4_LOOP=datavg.sapdata2lv
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata2lv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata2lv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata2lv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata2lv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata2lv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata3lv[check_sync:221] PS4_LOOP=datavg.sapdata3lv
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata3lv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata3lv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata3lv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata3lv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata3lv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata4lv[check_sync:221] PS4_LOOP=datavg.sapdata4lv
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata4lv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata4lv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata4lv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata4lv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.312):datavg.sapdata4lv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.312):datavg.boardlv[check_sync:221] PS4_LOOP=datavg.boardlv
+epprd_rg:cl_sync_vgs(0.312):datavg.boardlv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.312):datavg.boardlv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.312):datavg.boardlv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.312):datavg.boardlv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.312):datavg.boardlv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.312):datavg.origlogAlv[check_sync:221] PS4_LOOP=datavg.origlogAlv
+epprd_rg:cl_sync_vgs(0.312):datavg.origlogAlv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.312):datavg.origlogAlv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.312):datavg.origlogAlv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.312):datavg.origlogAlv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.312):datavg.origlogAlv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.312):datavg.origlogBlv[check_sync:221] PS4_LOOP=datavg.origlogBlv
+epprd_rg:cl_sync_vgs(0.312):datavg.origlogBlv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.312):datavg.origlogBlv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.312):datavg.origlogBlv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.312):datavg.origlogBlv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.312):datavg.origlogBlv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.312):datavg.mirrlogAlv[check_sync:221] PS4_LOOP=datavg.mirrlogAlv
+epprd_rg:cl_sync_vgs(0.312):datavg.mirrlogAlv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.312):datavg.mirrlogAlv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.312):datavg.mirrlogAlv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.312):datavg.mirrlogAlv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.312):datavg.mirrlogAlv[check_sync:221] [[ high == high ]]
+epprd_rg:cl_sync_vgs(0.312):datavg.mirrlogBlv[check_sync:221] PS4_LOOP=datavg.mirrlogBlv
+epprd_rg:cl_sync_vgs(0.312):datavg.mirrlogBlv[check_sync:222] (( 1 != 2 && 1 != 3 && 1 != 5 && 1 != 7 ))
+epprd_rg:cl_sync_vgs(0.312):datavg.mirrlogBlv[check_sync:225] : Anything else indicates no stale partitions
+epprd_rg:cl_sync_vgs(0.312):datavg.mirrlogBlv[check_sync:227] continue
+epprd_rg:cl_sync_vgs(0.312):datavg.mirrlogBlv[check_sync:219] read lv_id lv_name lv_status
+epprd_rg:cl_sync_vgs(0.312):datavg.mirrlogBlv[check_sync:268] [[ -n RG_MOVE_COMPLETE ]]
+epprd_rg:cl_sync_vgs(0.312):datavg.mirrlogBlv[check_sync:268] (( 0 == 1 ))
+epprd_rg:cl_sync_vgs[355] exit 0
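The check_sync loop above walks the output of lqueryvg -L, which prints one "lv_id lv_name status" line per logical volume; every LV in datavg reports status 1, so nothing is synced and cl_sync_vgs exits 0. A hedged sketch of the loop (the status codes 2, 3, 5 and 7 and the -P flag are from the trace; the syncvg invocation itself is an assumption):

    lqueryvg -g "$vgid" -L | while read lv_id lv_name lv_status; do
        case $lv_status in
            2|3|5|7) syncvg $syncflag -l "$lv_name" ;;   # dirty and/or stale copies: sync this LV
            *)       ;;                                  # anything else: no stale partitions
        esac
    done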
+epprd_rg:clmanageroha[roha_session_open:187] echo epprd_app
+epprd_rg:clmanageroha[roha_session_open:187] sort -u
+epprd_rg:clmanageroha[roha_session_open:187] echo '\nepprd_app'
+epprd_rg:clmanageroha[roha_session_open:187] [[ -z '' ]]
+epprd_rg:clmanageroha[roha_session_open:189] roha_session_log 'INFO: No ROHA configured on applications.\n'
[ROHALOG:27001166:(0.551)] INFO: No ROHA configured on applications.
[ROHALOG:27001166:(0.551)]
+epprd_rg:clmanageroha[roha_session_open:190] no_roha_apps=1
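The ROHA detection at line 187 of roha_session_open intersects the application list passed with -l against the output of clmgr q roha; sort | uniq -d prints only lines present in both streams. A hedged reconstruction (variable names here are illustrative, not from the source):

    roha_apps=$( { echo "$optimal_apps" | tr ',' '\n' | sort -u
                   clmgr q roha; } | sort | uniq -d )
    if [[ -z $roha_apps ]]; then
        no_roha_apps=1       # none of the requested apps is ROHA-managed
    fi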
+epprd_rg:clmanageroha[roha_session_open:195] read_tunables
+epprd_rg:clmanageroha[roha_session_open:196] echo ''
+epprd_rg:clmanageroha[roha_session_open:196] grep -q epprda
+epprd_rg:clmanageroha[roha_session_open:197] (( 1 == 0 ))
+epprd_rg:clmanageroha[roha_session_open:202] (( 1 == 1 ))
+epprd_rg:clmanageroha[roha_session_open:203] roha_session_read_odm_dynresop DLPAR_MEM
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] clodmget -q key=DLPAR_MEM -nf value HACMPdynresop
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] ODMDIR=/etc/es/objrepos
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] out=''
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:817] print -- 0
+epprd_rg:clmanageroha[roha_session_open:203] (( 0 == 0.00 ))
+epprd_rg:clmanageroha[roha_session_open:204] roha_session_read_odm_dynresop DLPAR_PROCS
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] clodmget -q key=DLPAR_PROCS -nf value HACMPdynresop
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] ODMDIR=/etc/es/objrepos
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] out=''
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:817] print -- 0
+epprd_rg:clmanageroha[roha_session_open:204] (( 0 == 0 ))
+epprd_rg:clmanageroha[roha_session_open:205] roha_session_read_odm_dynresop DLPAR_PROC_UNITS
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] clodmget -q key=DLPAR_PROC_UNITS -nf value HACMPdynresop
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] ODMDIR=/etc/es/objrepos
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] out=''
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:817] print -- 0
+epprd_rg:clmanageroha[roha_session_open:205] (( 0 == 0.00 ))
+epprd_rg:clmanageroha[roha_session_open:206] roha_session_log 'INFO: Nothing to be done.\n'
[ROHALOG:27001166:(0.609)] INFO: Nothing to be done.
[ROHALOG:27001166:(0.609)]
+epprd_rg:clmanageroha[roha_session_open:207] exit 0
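With no ROHA applications configured, the session exits early once it confirms there are no leftover DLPAR allocations recorded in the ODM. A sketch of that no-op decision (the clodmget calls and key names are from the trace; the loop wrapper is an assumption):

    pending=0
    for key in DLPAR_MEM DLPAR_PROCS DLPAR_PROC_UNITS; do
        val=$(ODMDIR=/etc/es/objrepos clodmget -q key=$key -nf value HACMPdynresop)
        [[ -n $val && $val != 0 ]] && pending=1   # a saved offset means work remains
    done
    (( pending == 0 )) && exit 0                  # nothing to acquire: return success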
+epprd_rg:process_resources[process_applications:335] RC=0
+epprd_rg:process_resources[process_applications:336] (( 0 != 0 ))
+epprd_rg:process_resources[process_applications:343] (( LPAR_ACQUIRE_FAILED == 0 ))
+epprd_rg:process_resources[process_applications:345] : Loop through all groups to start or stop applications
+epprd_rg:process_resources[process_applications:348] export GROUPNAME
+epprd_rg:process_resources[process_applications:351] : Break out application data
+epprd_rg:process_resources[process_applications:353] get_list_head epprd_app
+epprd_rg:process_resources[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources[get_list_head:60] set -x
+epprd_rg:process_resources[get_list_head:61] echo epprd_app
+epprd_rg:process_resources[get_list_head:61] read listhead listtail
+epprd_rg:process_resources[get_list_head:61] IFS=:
+epprd_rg:process_resources[get_list_head:62] echo epprd_app
+epprd_rg:process_resources[get_list_head:62] tr , ' '
+epprd_rg:process_resources[process_applications:353] read LIST_OF_APPLICATIONS_FOR_RG
+epprd_rg:process_resources[process_applications:354] get_list_tail epprd_app
+epprd_rg:process_resources[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources[get_list_tail:68] set -x
+epprd_rg:process_resources[get_list_tail:69] echo epprd_app
+epprd_rg:process_resources[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources[get_list_tail:69] IFS=:
+epprd_rg:process_resources[get_list_tail:70] echo
+epprd_rg:process_resources[process_applications:354] read ALL_APPLICATIONS
+epprd_rg:process_resources[process_applications:356] get_list_head
+epprd_rg:process_resources[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources[get_list_head:60] set -x
+epprd_rg:process_resources[get_list_head:61] echo
+epprd_rg:process_resources[get_list_head:61] read listhead listtail
+epprd_rg:process_resources[get_list_head:61] IFS=:
+epprd_rg:process_resources[get_list_head:62] echo
+epprd_rg:process_resources[get_list_head:62] tr , ' '
+epprd_rg:process_resources[process_applications:356] read MISCDATA_FOR_RG
+epprd_rg:process_resources[process_applications:357] get_list_tail
+epprd_rg:process_resources[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources[get_list_tail:68] set -x
+epprd_rg:process_resources[get_list_tail:69] echo
+epprd_rg:process_resources[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources[get_list_tail:69] IFS=:
+epprd_rg:process_resources[get_list_tail:70] echo
+epprd_rg:process_resources[process_applications:357] read MISCDATA
+epprd_rg:process_resources[process_applications:359] [[ ACQUIRE == RELEASE ]]
+epprd_rg:process_resources[process_applications:374] APPLICATIONS=epprd_app
+epprd_rg:process_resources[process_applications:374] export APPLICATIONS
+epprd_rg:process_resources[process_applications:375] MISC_DATA=''
+epprd_rg:process_resources[process_applications:375] export MISC_DATA
+epprd_rg:process_resources[process_applications:378] : Now call start_or_stop_applications_for_rg to do the app start/stop.
+epprd_rg:process_resources[process_applications:384] : Add PID of the last bg start_or_stop_applications_for_rg process to WAITPIDS.
+epprd_rg:process_resources[process_applications:386] WAITPIDS=' 27722126'
+epprd_rg:process_resources[process_applications:390] : Wait for the start_or_stop_applications_for_rg PIDs to finish.
+epprd_rg:process_resources[process_applications:393] wait 27722126
+epprd_rg:process_resources[process_applications:381] start_or_stop_applications_for_rg ACQUIRE /var/hacmp/log/.process_resources_applications.26804672.epprd_rg
+epprd_rg:process_resources[start_or_stop_applications_for_rg:248] PS4_FUNC=start_or_stop_applications_for_rg
+epprd_rg:process_resources[start_or_stop_applications_for_rg:248] typeset PS4_FUNC
+epprd_rg:process_resources[start_or_stop_applications_for_rg:249] [[ high == high ]]
+epprd_rg:process_resources[start_or_stop_applications_for_rg:249] set -x
+epprd_rg:process_resources[start_or_stop_applications_for_rg:251] [[ ACQUIRE == ACQUIRE ]]
+epprd_rg:process_resources[start_or_stop_applications_for_rg:253] cmd_to_execute=start_server
+epprd_rg:process_resources[start_or_stop_applications_for_rg:259] : File name to store our exit status
+epprd_rg:process_resources[start_or_stop_applications_for_rg:261] STATUS_FILE=/var/hacmp/log/.process_resources_applications.26804672.epprd_rg
+epprd_rg:process_resources[start_or_stop_applications_for_rg:264] : Use clcallev to run the event
+epprd_rg:process_resources[start_or_stop_applications_for_rg:266] clcallev start_server epprd_app
Jan 28 2023 17:11:26 EVENT START: start_server epprd_app
|2023-01-28T17:11:26|28698|EVENT START: start_server epprd_app|
+epprd_rg:start_server[+206] version=%I%
+epprd_rg:start_server[+210] export TMP_FILE=/var/hacmp/log/.start_server.27001176
+epprd_rg:start_server[+211] export DCD=/etc/es/objrepos
+epprd_rg:start_server[+212] export ACD=/usr/es/sbin/cluster/etc/objrepos/active
+epprd_rg:start_server[+214] rm -f /var/hacmp/log/.start_server.27001176
+epprd_rg:start_server[+216] STATUS=0
+epprd_rg:start_server[+220] PROC_RES=false
+epprd_rg:start_server[+224] [[ APPLICATIONS != 0 ]]
+epprd_rg:start_server[+224] [[ APPLICATIONS != GROUP ]]
+epprd_rg:start_server[+225] PROC_RES=true
+epprd_rg:start_server[+228] set -u
+epprd_rg:start_server[+229] typeset WPARNAME EXEC WPARDIR
+epprd_rg:start_server[+230] export WPARNAME EXEC WPARDIR
+epprd_rg:start_server[+232] EXEC=
+epprd_rg:start_server[+233] WPARNAME=
+epprd_rg:start_server[+234] WPARDIR=
+epprd_rg:start_server[+237] ALLSERVERS=All_servers
+epprd_rg:start_server[+238] ALLNOERRSERV=All_nonerror_servers
+epprd_rg:start_server[+239] cl_RMupdate resource_acquiring All_servers start_server
2023-01-28T17:11:26.505465
2023-01-28T17:11:26.509687
+epprd_rg:start_server[+241] +epprd_rg:start_server[+241] clwparname epprd_rg
+epprd_rg:clwparname[38] version=1.3.1.1
+epprd_rg:clwparname[44] clodmget '-qname = WPAR_NAME' -f group -n HACMPresource
+epprd_rg:clwparname[44] [[ -z '' ]]
+epprd_rg:clwparname[44] exit 0
WPARNAME=
+epprd_rg:start_server[+243] (( 0 == 0 ))
+epprd_rg:start_server[+243] [[ -n ]]
+epprd_rg:start_server[+258] start_and_monitor_server epprd_app
+epprd_rg:start_server[start_and_monitor_server+5] +epprd_rg:start_server[+261] wait
RETURN_STATUS=0
+epprd_rg:start_server[start_and_monitor_server+7] server=epprd_app
+epprd_rg:start_server[start_and_monitor_server+12] echo Checking whether epprd_app is already running...\n
Checking whether epprd_app is already running...
+epprd_rg:start_server[start_and_monitor_server+12] [[ -n ]]
+epprd_rg:start_server[start_and_monitor_server+18] cl_app_startup_monitor -s epprd_app -a
+epprd_rg:start_server[start_and_monitor_server+21] RETURN_STATUS=1
+epprd_rg:start_server[start_and_monitor_server+22] : exit status of cl_app_startup_monitor is: 1
+epprd_rg:start_server[start_and_monitor_server+22] [[ 1 == 0 ]]
+epprd_rg:start_server[start_and_monitor_server+33] echo Application monitor(s) indicate that epprd_app is not active. Continuing with application startup.\n
Application monitor(s) indicate that epprd_app is not active. Continuing with application startup.
+epprd_rg:start_server[start_and_monitor_server+42] +epprd_rg:start_server[start_and_monitor_server+42] cllsserv -cn epprd_app
+epprd_rg:start_server[start_and_monitor_server+42] cut -d: -f2
START=/etc/hacmp/epprd_start.sh
+epprd_rg:start_server[start_and_monitor_server+43] +epprd_rg:start_server[start_and_monitor_server+43] echo /etc/hacmp/epprd_start.sh
+epprd_rg:start_server[start_and_monitor_server+43] cut -d -f1
START_SCRIPT=/etc/hacmp/epprd_start.sh
+epprd_rg:start_server[start_and_monitor_server+44] +epprd_rg:start_server[start_and_monitor_server+44] cllsserv -cn epprd_app
+epprd_rg:start_server[start_and_monitor_server+44] cut -d: -f4
START_MODE=background
+epprd_rg:start_server[start_and_monitor_server+44] [[ -z background ]]
+epprd_rg:start_server[start_and_monitor_server+47] PATTERN=epprda epprd_app
+epprd_rg:start_server[start_and_monitor_server+48] RETURN_STATUS=0
+epprd_rg:start_server[start_and_monitor_server+51] amlog_trace Starting application controller in background|epprd_app
+epprd_rg:start_server[start_and_monitor_server+200] clcycle clavailability.log
+epprd_rg:start_server[start_and_monitor_server+200] 1> /dev/null 2>& 1
+epprd_rg:start_server[start_and_monitor_server+200] +epprd_rg:start_server[start_and_monitor_server+200] cltime
DATE=2023-01-28T17:11:26.560622
+epprd_rg:start_server[start_and_monitor_server+200] echo |2023-01-28T17:11:26.560622|INFO: Starting application controller in background|epprd_app
+epprd_rg:start_server[start_and_monitor_server+200] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:start_server[start_and_monitor_server+51] [[ -n ]]
+epprd_rg:start_server[start_and_monitor_server+51] [[ -z ]]
+epprd_rg:start_server[start_and_monitor_server+51] [[ -x /etc/hacmp/epprd_start.sh ]]
+epprd_rg:start_server[start_and_monitor_server+60] [ background == background ]
+epprd_rg:start_server[start_and_monitor_server+62] date
+epprd_rg:start_server[start_and_monitor_server+62] LC_ALL=C
+epprd_rg:start_server[start_and_monitor_server+62] echo Running application controller start script for epprd_app in the background at Sat Jan 28 17:11:26 KORST 2023.\n
Running application controller start script for epprd_app in the background at Sat Jan 28 17:11:26 KORST 2023.
+epprd_rg:start_server[start_and_monitor_server+63] /etc/hacmp/epprd_start.sh
+epprd_rg:start_server[start_and_monitor_server+63] ODMDIR=/etc/es/objrepos
+epprd_rg:start_server[start_and_monitor_server+62] [[ 0 != 0 ]]
+epprd_rg:start_server[start_and_monitor_server+62] [[ -n ]]
+epprd_rg:start_server[start_and_monitor_server+94] cl_app_startup_monitor -s epprd_app
+epprd_rg:start_server[start_and_monitor_server+97] RETURN_STATUS=0
+epprd_rg:start_server[start_and_monitor_server+98] : exit status of cl_app_startup_monitor is: 0
+epprd_rg:start_server[start_and_monitor_server+98] [[ 0 != 0 ]]
+epprd_rg:start_server[start_and_monitor_server+109] echo epprd_app 0
+epprd_rg:start_server[start_and_monitor_server+109] 1> /var/hacmp/log/.start_server.27001176.epprd_app
+epprd_rg:start_server[start_and_monitor_server+112] +epprd_rg:start_server[start_and_monitor_server+112] cllsserv -cn epprd_app
+epprd_rg:start_server[start_and_monitor_server+112] cut -d: -f4
START_MODE=background
+epprd_rg:start_server[start_and_monitor_server+112] [[ background == foreground ]]
+epprd_rg:start_server[start_and_monitor_server+132] return 0
+epprd_rg:start_server[+266] +epprd_rg:start_server[+266] cllsserv -cn epprd_app
+epprd_rg:start_server[+266] cut -d: -f4
START_MODE=background
+epprd_rg:start_server[+267] [ background == background ]
+epprd_rg:start_server[+269] +epprd_rg:start_server[+269] cat /var/hacmp/log/.start_server.27001176.epprd_app
+epprd_rg:start_server[+269] cut -f2 -d
SUCCESS=0
+epprd_rg:start_server[+269] [[ 0 != 0 ]]
+epprd_rg:start_server[+274] amlog_trace Starting application controller in background|epprd_app
+epprd_rg:start_server[+200] clcycle clavailability.log
+epprd_rg:start_server[+200] 1> /dev/null 2>& 1
+epprd_rg:start_server[+200] +epprd_rg:start_server[+200] cltime
DATE=2023-01-28T17:11:26.605943
+epprd_rg:start_server[+200] echo |2023-01-28T17:11:26.605943|INFO: Starting application controller in background|epprd_app
+epprd_rg:start_server[+200] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:start_server[+276] +epprd_rg:start_server[+276] clodmget -q name = epprd_app -n -f cpu_usage_monitor HACMPserver
MACTIVE=no
+epprd_rg:start_server[+276] [[ no == yes ]]
+epprd_rg:start_server[+292] +epprd_rg:start_server[+292] cat /var/hacmp/log/.start_server.27001176.epprd_app
+epprd_rg:start_server[+292] cut -f2 -d
SUCCESS=0
+epprd_rg:start_server[+292] [[ 0 != +([0-9]) ]]
+epprd_rg:start_server[+297] (( 0 != 0 ))
+epprd_rg:start_server[+303] [[ 0 == 0 ]]
+epprd_rg:start_server[+306] rm -f /var/hacmp/log/.start_server.27001176.epprd_app
+epprd_rg:start_server[+308] cl_RMupdate resource_up All_nonerror_servers start_server
2023-01-28T17:11:26.635818
2023-01-28T17:11:26.639993
+epprd_rg:start_server[+314] exit 0
Jan 28 2023 17:11:26 EVENT COMPLETED: start_server epprd_app 0
|2023-01-28T17:11:26|28698|EVENT COMPLETED: start_server epprd_app 0|
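The whole start_server event reduces to: check the startup monitor, run the controller start script in the background, then re-check the monitor and record a per-application status file for the caller. A minimal sketch under those assumptions (paths and names come from the trace; the control flow is a reconstruction):

    cl_app_startup_monitor -s epprd_app -a
    if (( $? != 0 )); then                       # monitors say the app is not active
        ODMDIR=/etc/es/objrepos /etc/hacmp/epprd_start.sh &   # START_MODE=background
        cl_app_startup_monitor -s epprd_app      # wait for monitors to confirm startup
        RC=$?
    else
        RC=0                                     # already running: nothing to start
    fi
    echo "epprd_app $RC" > /var/hacmp/log/.start_server.$$.epprd_app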
+epprd_rg:process_resources[start_or_stop_applications_for_rg:267] RC=0
+epprd_rg:process_resources[start_or_stop_applications_for_rg:269] (( 0 != 0 && 0 != 11 ))
+epprd_rg:process_resources[start_or_stop_applications_for_rg:279] (( 0 != 0 ))
+epprd_rg:process_resources[start_or_stop_applications_for_rg:291] : Store the result for later accumulation
+epprd_rg:process_resources[start_or_stop_applications_for_rg:293] print 'epprd_rg 0'
+epprd_rg:process_resources[start_or_stop_applications_for_rg:293] 1>> /var/hacmp/log/.process_resources_applications.26804672.epprd_rg
+epprd_rg:process_resources[process_applications:396] : Look at all the status files to see if any were unsuccessful
+epprd_rg:process_resources[process_applications:399] cat /var/hacmp/log/.process_resources_applications.26804672.epprd_rg
+epprd_rg:process_resources[process_applications:399] read skip SUCCESS rest
+epprd_rg:process_resources[process_applications:401] [[ 0 != 0 ]]
+epprd_rg:process_resources[process_applications:411] rm -f /var/hacmp/log/.process_resources_applications.26804672.epprd_rg
+epprd_rg:process_resources[process_applications:416] : Release lpar resources in one-shot now that applications are stopped
+epprd_rg:process_resources[process_applications:418] [[ ACQUIRE == RELEASE ]]
+epprd_rg:process_resources[process_applications:433] [[ 0 != 0 ]]
+epprd_rg:process_resources[process_applications:434] [[ 0 != 0 ]]
+epprd_rg:process_resources[process_applications:435] [[ 0 != 0 ]]
+epprd_rg:process_resources[process_applications:439] return 0
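process_applications fans the work out per resource group and then reads the per-RG status files back to decide overall success. A sketch of that fan-out/fan-in (WAITPIDS and the TMP_FILE naming are from the trace; here only one RG exists, so one worker is spawned):

    for rg in $RESOURCE_GROUPS; do
        export GROUPNAME=$rg
        start_or_stop_applications_for_rg ACQUIRE "$TMP_FILE.$rg" &
        WAITPIDS="$WAITPIDS $!"        # remember each background worker
    done
    wait $WAITPIDS                     # fan-in: block until all workers finish
    cat "$TMP_FILE.$GROUPNAME" | read skip SUCCESS rest   # "epprd_rg 0" on success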
+epprd_rg:process_resources[3550] RC=0
+epprd_rg:process_resources[3551] [[ ACQUIRE == RELEASE ]]
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T17:11:26.737941 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=ONLINE RESOURCE_GROUPS='"epprd_rg"'
+epprd_rg:process_resources[1] JOB_TYPE=ONLINE
+epprd_rg:process_resources[1] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ ONLINE == RELEASE ]]
+epprd_rg:process_resources[3360] [[ ONLINE == ONLINE ]]
+epprd_rg:process_resources[3363] INFO_STRING=''
+epprd_rg:process_resources[3364] clnodename
+epprd_rg:process_resources[3373] ENV_VAR=GROUP_epprd_rg_epprda
+epprd_rg:process_resources[3374] eval 'echo $GROUP_epprd_rg_epprda'
+epprd_rg:process_resources[1] echo WILLBEUPPOSTEVENT
+epprd_rg:process_resources[3374] read ENV_VAR
+epprd_rg:process_resources[3375] [[ WILLBEUPPOSTEVENT == WILLBEUPPOSTEVENT ]]
+epprd_rg:process_resources[3376] INFO_STRING='|DESTINATION=epprda'
+epprd_rg:process_resources[3377] IS_SERVICE_STOP=0
+epprd_rg:process_resources[3379] [[ WILLBEUPPOSTEVENT == ISUPPREEVENT ]]
+epprd_rg:process_resources[3373] ENV_VAR=GROUP_epprd_rg_epprds
+epprd_rg:process_resources[3374] eval 'echo $GROUP_epprd_rg_epprds'
+epprd_rg:process_resources[1] echo
+epprd_rg:process_resources[3374] read ENV_VAR
+epprd_rg:process_resources[3375] [[ '' == WILLBEUPPOSTEVENT ]]
+epprd_rg:process_resources[3379] [[ '' == ISUPPREEVENT ]]
+epprd_rg:process_resources[3384] (( 1 == 0 && 0 ==0 ))
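The ONLINE job type scans one GROUP_<rg>_<node> variable per cluster node to work out where the group is coming up. A hedged sketch of that scan (the variable naming follows the trace; the loop body is a reconstruction):

    for node in $(clnodename); do
        eval "state=\${GROUP_${GROUPNAME}_${node}:-}"
        if [[ $state == WILLBEUPPOSTEVENT ]]; then
            INFO_STRING="$INFO_STRING|DESTINATION=$node"   # group lands on this node
            IS_SERVICE_STOP=0
        fi
    done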
+epprd_rg:process_resources[3673] set_resource_group_state UP
+epprd_rg:process_resources[set_resource_group_state:82] PS4_FUNC=set_resource_group_state
+epprd_rg:process_resources[set_resource_group_state:82] typeset PS4_FUNC
+epprd_rg:process_resources[set_resource_group_state:83] [[ high == high ]]
+epprd_rg:process_resources[set_resource_group_state:83] set -x
+epprd_rg:process_resources[set_resource_group_state:84] STAT=0
+epprd_rg:process_resources[set_resource_group_state:85] new_status=UP
+epprd_rg:process_resources[set_resource_group_state:89] export GROUPNAME
+epprd_rg:process_resources[set_resource_group_state:90] [[ UP != DOWN ]]
+epprd_rg:process_resources[set_resource_group_state:92] clchdaemons -d clstrmgr_scripts -t resource_locator -n epprda -o epprd_rg -v UP
+epprd_rg:process_resources[set_resource_group_state:100] : Resource Manager Updates
+epprd_rg:process_resources[set_resource_group_state:116] cl_RMupdate rg_up epprd_rg process_resources
2023-01-28T17:11:26.776941
2023-01-28T17:11:26.781190
+epprd_rg:process_resources[set_resource_group_state:118] amlog_trace '' 'acquire|epprd_rg|epprda'
+epprd_rg:process_resources[amlog_trace:318] clcycle clavailability.log
+epprd_rg:process_resources[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:process_resources[amlog_trace:319] cltime
+epprd_rg:process_resources[amlog_trace:319] DATE=2023-01-28T17:11:26.811032
+epprd_rg:process_resources[amlog_trace:320] echo '|2023-01-28T17:11:26.811032|INFO: acquire|epprd_rg|epprda'
+epprd_rg:process_resources[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
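The amlog_trace helper seen here, and after every event in this log, follows one pattern: rotate the availability log, stamp the time, and append a single pipe-delimited record. A sketch of that pattern (all three commands appear verbatim in the trace):

    clcycle clavailability.log > /dev/null 2>&1     # rotate the log if needed
    DATE=$(cltime)                                  # ISO-style timestamp
    echo "|$DATE|INFO: acquire|epprd_rg|epprda" >> /var/hacmp/availability/clavailability.log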
+epprd_rg:process_resources[set_resource_group_state:153] return 0
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T17:11:26.822862 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=NONE
+epprd_rg:process_resources[1] JOB_TYPE=NONE
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ NONE == RELEASE ]]
+epprd_rg:process_resources[3360] [[ NONE == ONLINE ]]
+epprd_rg:process_resources[3729] break
+epprd_rg:process_resources[3740] : If sddsrv was turned off above, turn it back on again
+epprd_rg:process_resources[3742] [[ FALSE == TRUE ]]
+epprd_rg:process_resources[3747] exit 0
+epprd_rg:rg_move_complete[455] STATUS=0
+epprd_rg:rg_move_complete[456] : The exit status of process_resources is: 0
+epprd_rg:rg_move_complete[461] unset AM_SYNC_CALLED_BY
+epprd_rg:rg_move_complete[462] [[ TRUE == TRUE ]]
+epprd_rg:rg_move_complete[491] [[ -z '' ]]
+epprd_rg:rg_move_complete[493] RESOURCE_GROUPS=epprd_rg
+epprd_rg:rg_move_complete[499] GROUPNAME=epprd_rg
+epprd_rg:rg_move_complete[499] export GROUPNAME
+epprd_rg:rg_move_complete[501] cl_rrmethods2call postrg_move
+epprd_rg:cl_rrmethods2call[56] version=%I%
+epprd_rg:cl_rrmethods2call[84] RRMETHODS=''
+epprd_rg:cl_rrmethods2call[85] NEED_RR_ENV_VARS=no
+epprd_rg:cl_rrmethods2call[124] NEED_RR_ENV_VARS=yes
+epprd_rg:cl_rrmethods2call[129] : Set the '*_REP_RESOURCE' variables if needed.
+epprd_rg:cl_rrmethods2call[131] [[ yes == yes ]]
+epprd_rg:cl_rrmethods2call[133] cllsres
+epprd_rg:cl_rrmethods2call[133] 2> /dev/null
+epprd_rg:cl_rrmethods2call[133] eval APPLICATIONS='"epprd_app"' EXPORT_FILESYSTEM='"/board_org"' FILESYSTEM='""' FORCED_VARYON='"false"' FSCHECK_TOOL='"fsck"' FS_BEFORE_IPADDR='"false"' MOUNT_FILESYSTEM='"/board;/board_org"' RECOVERY_METHOD='"sequential"' SERVICE_LABEL='"epprd"' SSA_DISK_FENCING='"false"' VG_AUTO_IMPORT='"false"' VOLUME_GROUP='"datavg"' USERDEFINED_RESOURCES='""'
+epprd_rg:cl_rrmethods2call[1] APPLICATIONS=epprd_app
+epprd_rg:cl_rrmethods2call[1] EXPORT_FILESYSTEM=/board_org
+epprd_rg:cl_rrmethods2call[1] FILESYSTEM=''
+epprd_rg:cl_rrmethods2call[1] FORCED_VARYON=false
+epprd_rg:cl_rrmethods2call[1] FSCHECK_TOOL=fsck
+epprd_rg:cl_rrmethods2call[1] FS_BEFORE_IPADDR=false
+epprd_rg:cl_rrmethods2call[1] MOUNT_FILESYSTEM='/board;/board_org'
+epprd_rg:cl_rrmethods2call[1] RECOVERY_METHOD=sequential
+epprd_rg:cl_rrmethods2call[1] SERVICE_LABEL=epprd
+epprd_rg:cl_rrmethods2call[1] SSA_DISK_FENCING=false
+epprd_rg:cl_rrmethods2call[1] VG_AUTO_IMPORT=false
+epprd_rg:cl_rrmethods2call[1] VOLUME_GROUP=datavg
+epprd_rg:cl_rrmethods2call[1] USERDEFINED_RESOURCES=''
+epprd_rg:cl_rrmethods2call[137] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[142] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[147] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[152] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[157] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[162] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[167] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[172] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[182] [[ -z '' ]]
+epprd_rg:cl_rrmethods2call[184] typeset sysmgdata
+epprd_rg:cl_rrmethods2call[185] typeset reposmgdata
+epprd_rg:cl_rrmethods2call[186] [[ -x /usr/es/sbin/cluster/xd_generic/xd_cli/clxd_list_mg_smit ]]
+epprd_rg:cl_rrmethods2call[191] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[191] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[197] echo ''
+epprd_rg:cl_rrmethods2call[199] return 0
+epprd_rg:rg_move_complete[501] METHODS=''
+epprd_rg:rg_move_complete[516] refresh -s clcomd
0513-095 The request for subsystem refresh was completed successfully.
+epprd_rg:rg_move_complete[518] exit 0
Jan 28 2023 17:11:26 EVENT COMPLETED: rg_move_complete epprda 1 0
|2023-01-28T17:11:26|28698|EVENT COMPLETED: rg_move_complete epprda 1 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:11:26.935403
+ echo '|2023-01-28T17:11:26.935403|INFO: rg_move_complete|epprd_rg|epprda|1|0'
+ 1>> /var/hacmp/availability/clavailability.log
PowerHA SystemMirror Event Summary
----------------------------------------------------------------------------
Serial number for this event: 28698
Event: TE_RG_MOVE_ACQUIRE
Start time: Sat Jan 28 17:10:33 2023
End time: Sat Jan 28 17:11:26 2023
Action: Resource: Script Name:
----------------------------------------------------------------------------
Acquiring resource group: epprd_rg process_resources
Search on: Sat.Jan.28.17:10:34.KORST.2023.process_resources.epprd_rg.ref
Acquiring resource: All_service_addrs acquire_service_addr
Search on: Sat.Jan.28.17:10:34.KORST.2023.acquire_service_addr.All_service_addrs.epprd_rg.ref
Resource online: All_nonerror_service_addrs acquire_service_addr
Search on: Sat.Jan.28.17:10:35.KORST.2023.acquire_service_addr.All_nonerror_service_addrs.epprd_rg.ref
Acquiring resource: All_volume_groups cl_activate_vgs
Search on: Sat.Jan.28.17:10:35.KORST.2023.cl_activate_vgs.All_volume_groups.epprd_rg.ref
Resource online: All_nonerror_volume_groups cl_activate_vgs
Search on: Sat.Jan.28.17:10:39.KORST.2023.cl_activate_vgs.All_nonerror_volume_groups.epprd_rg.ref
Acquiring resource: All_filesystems cl_activate_fs
Search on: Sat.Jan.28.17:10:40.KORST.2023.cl_activate_fs.All_filesystems.epprd_rg.ref
Resource online: All_non_error_filesystems cl_activate_fs
Search on: Sat.Jan.28.17:10:44.KORST.2023.cl_activate_fs.All_non_error_filesystems.epprd_rg.ref
Acquiring resource: All_exports cl_export_fs
Search on: Sat.Jan.28.17:10:54.KORST.2023.cl_export_fs.All_exports.epprd_rg.ref
Resource online: All_nonerror_exports cl_export_fs
Search on: Sat.Jan.28.17:10:54.KORST.2023.cl_export_fs.All_nonerror_exports.epprd_rg.ref
Acquiring resource: All_nfs_mounts cl_activate_nfs
Search on: Sat.Jan.28.17:10:54.KORST.2023.cl_activate_nfs.All_nfs_mounts.epprd_rg.ref
Acquiring resource: All_servers start_server
Search on: Sat.Jan.28.17:11:26.KORST.2023.start_server.All_servers.epprd_rg.ref
Resource online: All_nonerror_servers start_server
Search on: Sat.Jan.28.17:11:26.KORST.2023.start_server.All_nonerror_servers.epprd_rg.ref
Resource group online: epprd_rg process_resources
Search on: Sat.Jan.28.17:11:26.KORST.2023.process_resources.epprd_rg.ref
----------------------------------------------------------------------------
|EVENT_SUMMARY_START|TE_RG_MOVE_ACQUIRE|2023-01-28T17:10:33|2023-01-28T17:11:26|28698|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.17:10:34.KORST.2023.process_resources.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.17:10:34.KORST.2023.acquire_service_addr.All_service_addrs.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.17:10:35.KORST.2023.acquire_service_addr.All_nonerror_service_addrs.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.17:10:35.KORST.2023.cl_activate_vgs.All_volume_groups.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.17:10:39.KORST.2023.cl_activate_vgs.All_nonerror_volume_groups.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.17:10:40.KORST.2023.cl_activate_fs.All_filesystems.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.17:10:44.KORST.2023.cl_activate_fs.All_non_error_filesystems.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.17:10:54.KORST.2023.cl_export_fs.All_exports.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.17:10:54.KORST.2023.cl_export_fs.All_nonerror_exports.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.17:10:54.KORST.2023.cl_activate_nfs.All_nfs_mounts.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.17:11:26.KORST.2023.start_server.All_servers.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.17:11:26.KORST.2023.start_server.All_nonerror_servers.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.17:11:26.KORST.2023.process_resources.epprd_rg.ref.ref|
|EVENT_SUMMARY_END|
PowerHA SystemMirror Event Preamble
----------------------------------------------------------------------------
Serial number for this event: 28699
No resource state change initiated by the cluster manager as a result of this event
----------------------------------------------------------------------------
|EVENT_PREAMBLE_START|TE_JOIN_NODE_DEP_COMPLETE|2023-01-28T17:11:28|28699|
|EVENT_NO_ACTIONS_QUEUED|
|EVENT_PREAMBLE_END|
Jan 28 2023 17:11:29 EVENT START: node_up_complete epprda
|2023-01-28T17:11:29|28699|EVENT START: node_up_complete epprda|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:11:29.142355
+ echo '|2023-01-28T17:11:29.142355|INFO: node_up_complete|epprda'
+ 1>> /var/hacmp/availability/clavailability.log
+ version=%I%
+ set -a
+ cllsparam -n epprda
+ eval NODE_NAME=epprda VERBOSE_LOGGING=high PS4=$'\'${GROUPNAME:++$GROUPNAME}:${PROGNAME:-${0##*/}}${PS4_TIMER:+($SECONDS)}${PS4_LOOP:+:$PS4_LOOP}[${ERRNO:+${PS4_FUNC:-}+}${KSH_VERSION:+${.sh.fun:+${.sh.fun}:}}$LINENO]' $'\'' DEBUG_LEVEL=Standard LC_ALL=$'\'C\''
+ NODE_NAME=epprda
+ VERBOSE_LOGGING=high
:node_up_complete[1] PS4='${GROUPNAME:++$GROUPNAME}:${PROGNAME:-${0##*/}}${PS4_TIMER:+($SECONDS)}${PS4_LOOP:+:$PS4_LOOP}[${ERRNO:+${PS4_FUNC:-}+}${KSH_VERSION:+${.sh.fun:+${.sh.fun}:}}$LINENO] '
:node_up_complete[1] DEBUG_LEVEL=Standard
:node_up_complete[1] LC_ALL=C
:node_up_complete[80] set +a
:node_up_complete[82] NODENAME=epprda
:node_up_complete[83] RC=0
:node_up_complete[83] typeset -i RC
:node_up_complete[84] UPDATESTATD=0
:node_up_complete[84] typeset -i UPDATESTATD
:node_up_complete[86] LPM_IN_PROGRESS_DIR=/var/hacmp/.lpm_in_progress
:node_up_complete[86] typeset LPM_IN_PROGRESS_DIR
:node_up_complete[87] LPM_IN_PROGRESS_PREFIX=lpm
:node_up_complete[87] typeset LPM_IN_PROGRESS_PREFIX
:node_up_complete[88] STATE_FILE=/var/hacmp/cl_dr.state
:node_up_complete[88] typeset STATE_FILE
:node_up_complete[97] STATUS=0
:node_up_complete[99] set -u
:node_up_complete[101] (( 1 < 1 ))
:node_up_complete[107] START_MODE=''
:node_up_complete[107] typeset START_MODE
:node_up_complete[108] (( 1 > 1 ))
:node_up_complete[114] : serial number for this event is 28699
:node_up_complete[118] RPCLOCKDSTOPPED=0
:node_up_complete[118] typeset -i RPCLOCKDSTOPPED
:node_up_complete[119] [[ -f /tmp/.RPCLOCKDSTOPPED ]]
:node_up_complete[127] clnodename
:node_up_complete[127] wc -l
:node_up_complete[127] (( 2 == 2 ))
:node_up_complete[129] clodmget -f group -n HACMPgroup
:node_up_complete[129] RESOURCE_GROUPS=epprd_rg
:node_up_complete[132] clodmget -q group='epprd_rg AND name=EXPORT_FILESYSTEM' -f value -n HACMPresource
:node_up_complete[132] EXPORTLIST=/board_org
:node_up_complete[133] [[ -n /board_org ]]
:node_up_complete[135] UPDATESTATD=1
:node_up_complete[136] [[ epprda == epprda ]]
:node_up_complete[139] lssrc -s rpc.statd
:node_up_complete[139] LC_ALL=C
:node_up_complete[139] grep inoperative
:node_up_complete[140] (( 1 == 0 ))
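node_up_complete only intervenes on rpc.statd when NFS exports are configured and SRC reports the daemon inoperative; here grep returns 1, so the daemon is healthy and the branch is skipped. A sketch of that guard (the check is from the trace; the restart action is an assumption about the untaken branch):

    if LC_ALL=C lssrc -s rpc.statd | grep -q inoperative; then
        startsrc -s rpc.statd       # bring statd back before updating its twin
    fi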
:node_up_complete[146] cl_update_statd
:cl_update_statd(0)[+174] version=%I%
:cl_update_statd(0)[+176] typeset -i RC=0
:cl_update_statd(0)[+178] LOCAL_FOUND=
:cl_update_statd(0)[+179] TWIN_NAME=
:cl_update_statd(0)[+180] [[ -z epprda ]]
:cl_update_statd(0)[+181] :cl_update_statd(0)[+181] cl_get_path -S
OP_SEP=~
:cl_update_statd(0)[+182] set -u
:cl_update_statd(0)[+187] LOCAL_FOUND=true
:cl_update_statd(0)[+194] : Make sure statd is running locally
:cl_update_statd(0)[+196] lssrc -s statd
:cl_update_statd(0)[+196] LC_ALL=C
:cl_update_statd(0)[+196] grep -qw inoperative
:cl_update_statd(0)[+196] rpcinfo -p
:cl_update_statd(0)[+196] LC_ALL=C
:cl_update_statd(0)[+196] grep -qw status
:cl_update_statd(0)[+207] : Get the current twin, if there is one
:cl_update_statd(0)[+209] :cl_update_statd(0)[+209] nfso -H sm_gethost
:cl_update_statd(0)[+209] 2>& 1
CURTWIN=
:cl_update_statd(0)[+210] RC=0
:cl_update_statd(0)[+212] [[ -z true ]]
:cl_update_statd(0)[+212] [[ -z ]]
:cl_update_statd(0)[+215] : Local node is no longer a cluster member, unregister its twin
:cl_update_statd(0)[+215] [[ -n ]]
:cl_update_statd(0)[+259] : RC is actually 0
:cl_update_statd(0)[+266] return 0
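cl_update_statd's decision above hinges on the registered statd "twin": nfso -H sm_gethost returns the currently registered twin (empty here), and since the local node is still a cluster member and no twin is configured, there is nothing to change. A minimal sketch of the lookup, using only the call shown in the trace:

    CURTWIN=$(nfso -H sm_gethost 2>&1)   # currently registered NSM twin, if any
    [[ -z $CURTWIN ]] && return 0        # no twin registered: nothing to unregister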
:node_up_complete[147] (( 0 ))
:node_up_complete[151] break
:node_up_complete[156] (( 1 ))
:node_up_complete[158] (( 0 ))
:node_up_complete[198] [[ TRUE == FALSE ]]
:node_up_complete[268] refresh -s clcomd
0513-095 The request for subsystem refresh was completed successfully.
:node_up_complete[270] : This is the final clRGinfo output
:node_up_complete[272] clRGinfo -p -t
:node_up_complete[272] 2>& 1
clRGinfo[431]: version I
clRGinfo[517]: Number of resource groups = 0
clRGinfo[562]: cluster epprda_cluster is version = 22
clRGinfo[597]: no resource groups specified on command line - print all
clRGinfo[685]: Current group is 'epprd_rg'
get primary state info for state 6
get secondary state info for state 6
getPrimaryStateStr: using primary_table => primary_state_table
get primary state info for state 4
get secondary state info for state 4
getPrimaryStateStr: using primary_table => primary_state_table
Cluster Name: epprda_cluster
Resource Group Name: epprd_rg
Node                                                             Group State     Delayed Timers
---------------------------------------------------------------- --------------- -------------------
epprda                                                           ONLINE
epprds                                                           OFFLINE
:node_up_complete[277] (( 0 == 0 ))
:node_up_complete[279] [[ epprda != epprda ]]
:node_up_complete[300] exit 0
Jan 28 2023 17:11:29 EVENT COMPLETED: node_up_complete epprda 0
|2023-01-28T17:11:29|28699|EVENT COMPLETED: node_up_complete epprda 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:11:29.342396
+ echo '|2023-01-28T17:11:29.342396|INFO: node_up_complete|epprda|0'
+ 1>> /var/hacmp/availability/clavailability.log
PowerHA SystemMirror Event Preamble
----------------------------------------------------------------------------
Serial number for this event: 22166
Cluster services started on node 'epprds'
Node Up Completion Event has been enqueued.
----------------------------------------------------------------------------
|EVENT_PREAMBLE_START|TE_JOIN_NODE_DEP|2023-01-28T17:11:33|22166|
|NODE_UP_COMPLETE|
|EVENT_PREAMBLE_END|
Jan 28 2023 17:11:35 EVENT START: node_up epprds
|2023-01-28T17:11:35|22166|EVENT START: node_up epprds|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:11:36.014717
+ echo '|2023-01-28T17:11:36.014717|INFO: node_up|epprds'
+ 1>> /var/hacmp/availability/clavailability.log
:node_up[182] version=%I%
:node_up[185] NODENAME=epprds
:node_up[185] export NODENAME
:node_up[193] STATUS=0
:node_up[193] typeset -li STATUS
:node_up[194] RC=0
:node_up[194] typeset -li RC
:node_up[195] ENABLE_NFS_CROSS_MOUNT=false
:node_up[196] START_MODE=''
:node_up[196] typeset START_MODE
:node_up[198] set -u
:node_up[200] (( 1 < 1 ))
:node_up[200] (( 1 > 2 ))
:node_up[207] : serial number for this event is 22166
:node_up[210] [[ epprda == epprds ]]
:node_up[219] (( 1 > 1 ))
:node_up[256] : If RG_DEPENDENCIES=false, process RGs with clsetenvgrp
:node_up[258] [[ TRUE == FALSE ]]
:node_up[281] : localnode processing prior to RG acquisition
:node_up[283] [[ epprda == epprds ]]
:node_up[498] : Enable NFS crossmounts during manual start
:node_up[500] [[ -n false ]]
:node_up[500] [[ false == true ]]
:node_up[607] : When RG dependencies are not configured we call node_up_local/remote,
:node_up[608] : followed by process_resources to process any remaining groups
:node_up[610] [[ TRUE == FALSE ]]
:node_up[657] [[ epprda == epprds ]]
:node_up[667] return 0
Jan 28 2023 17:11:36 EVENT COMPLETED: node_up epprds 0
|2023-01-28T17:11:36|22166|EVENT COMPLETED: node_up epprds 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:11:36.138887
+ echo '|2023-01-28T17:11:36.138887|INFO: node_up|epprds|0'
+ 1>> /var/hacmp/availability/clavailability.log
Jan 28 2023 17:11:39 EVENT START: rg_move_fence epprds 1
|2023-01-28T17:11:39|22167|EVENT START: rg_move_fence epprds 1|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:11:39.583928
+ echo '|2023-01-28T17:11:39.583928|INFO: rg_move_fence|epprd_rg|epprds|1'
+ 1>> /var/hacmp/availability/clavailability.log
:rg_move_fence[62] [[ high == high ]]
:rg_move_fence[62] version=1.11
:rg_move_fence[63] NODENAME=epprds
:rg_move_fence[63] export NODENAME
:rg_move_fence[65] set -u
:rg_move_fence[67] [ 2 != 2 ]
:rg_move_fence[73] set +u
:rg_move_fence[75] [[ -z TRUE ]]
:rg_move_fence[80] [[ TRUE == TRUE ]]
:rg_move_fence[82] LOCAL_NODENAME=epprda
:rg_move_fence[83] odmget -qid=1 HACMPgroup
:rg_move_fence[83] egrep 'group ='
:rg_move_fence[83] awk '{print $3}'
:rg_move_fence[83] eval RGNAME='"epprd_rg"'
:rg_move_fence[1] RGNAME=epprd_rg
+epprd_rg:rg_move_fence[84] GROUPNAME=epprd_rg
+epprd_rg:rg_move_fence[85] group_state='$RESGRP_epprd_rg_epprda'
+epprd_rg:rg_move_fence[86] set +u
+epprd_rg:rg_move_fence[87] eval print '$RESGRP_epprd_rg_epprda'
+epprd_rg:rg_move_fence[1] print ONLINE
+epprd_rg:rg_move_fence[87] RG_MOVE_ONLINE=ONLINE
+epprd_rg:rg_move_fence[87] export RG_MOVE_ONLINE
+epprd_rg:rg_move_fence[88] set -u
+epprd_rg:rg_move_fence[89] RG_MOVE_ONLINE=ONLINE
+epprd_rg:rg_move_fence[91] set -a
+epprd_rg:rg_move_fence[92] clsetenvgrp epprda rg_move epprd_rg ''
:clsetenvgrp[+49] [[ high = high ]]
:clsetenvgrp[+49] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clsetenvgrp.sh 1$
:clsetenvgrp[+51] usingVer=clSetenvgrp
:clsetenvgrp[+56] clSetenvgrp epprda rg_move epprd_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+57] exit 0
+epprd_rg:rg_move_fence[92] clsetenvgrp_output=FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
+epprd_rg:rg_move_fence[93] RC=0
+epprd_rg:rg_move_fence[94] eval FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
+epprd_rg:rg_move_fence[1] FORCEDOWN_GROUPS=''
+epprd_rg:rg_move_fence[2] RESOURCE_GROUPS=''
+epprd_rg:rg_move_fence[3] HOMELESS_GROUPS=''
+epprd_rg:rg_move_fence[4] HOMELESS_FOLLOWER_GROUPS=''
+epprd_rg:rg_move_fence[5] ERRSTATE_GROUPS=''
+epprd_rg:rg_move_fence[6] PRINCIPAL_ACTIONS=''
+epprd_rg:rg_move_fence[7] ASSOCIATE_ACTIONS=''
+epprd_rg:rg_move_fence[8] AUXILLIARY_ACTIONS=''
+epprd_rg:rg_move_fence[8] SIBLING_GROUPS=''
+epprd_rg:rg_move_fence[9] SIBLING_NODES_BY_GROUP=''
+epprd_rg:rg_move_fence[10] SIBLING_ACQUIRING_GROUPS=''
+epprd_rg:rg_move_fence[11] SIBLING_ACQUIRING_NODES_BY_GROUP=''
+epprd_rg:rg_move_fence[12] SIBLING_RELEASING_GROUPS=''
+epprd_rg:rg_move_fence[13] SIBLING_RELEASING_NODES_BY_GROUP=''
+epprd_rg:rg_move_fence[95] set +a
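# Annotation (editor, not part of the trace): the block above is a
# capture-and-eval idiom -- clsetenvgrp prints NAME="value" shell assignments
# on stdout, and rg_move_fence evals them between set -a / set +a so every
# variable it defines is auto-exported to child scripts. A minimal sketch,
# assuming clsetenvgrp keeps printing assignments as seen above:
set -a                                    # auto-export everything assigned
clsetenvgrp_output=$(clsetenvgrp epprda rg_move epprd_rg '')
RC=$?
eval "$clsetenvgrp_output"                # RESOURCE_GROUPS="", SIBLING_GROUPS="", ...
set +a
[ $RC -ne 0 ] && exit $RC                 # bail out if clsetenvgrp failed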
+epprd_rg:rg_move_fence[96] [ 0 -ne 0 ]
+epprd_rg:rg_move_fence[103] process_resources FENCE
:rg_move_fence[3318] version=1.169
:rg_move_fence[3321] STATUS=0
:rg_move_fence[3322] sddsrv_off=FALSE
:rg_move_fence[3324] true
:rg_move_fence[3326] : call rgpa, and it will tell us what to do next
:rg_move_fence[3328] set -a
:rg_move_fence[3329] clRGPA FENCE
:clRGPA[+47] [[ high = high ]]
:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
:clRGPA[+49] usingVer=clrgpa
:clRGPA[+54] clrgpa FENCE
2023-01-28T17:11:39.687207 clrgpa
:clRGPA[+55] exit 0
:rg_move_fence[3329] eval JOB_TYPE=NONE
:rg_move_fence[1] JOB_TYPE=NONE
:rg_move_fence[3330] RC=0
:rg_move_fence[3331] set +a
:rg_move_fence[3333] (( 0 != 0 ))
:rg_move_fence[3342] RESOURCE_GROUPS=''
:rg_move_fence[3343] GROUPNAME=''
:rg_move_fence[3343] export GROUPNAME
:rg_move_fence[3353] IS_SERVICE_START=1
:rg_move_fence[3354] IS_SERVICE_STOP=1
:rg_move_fence[3360] [[ NONE == RELEASE ]]
:rg_move_fence[3360] [[ NONE == ONLINE ]]
:rg_move_fence[3729] break
:rg_move_fence[3740] : If sddsrv was turned off above, turn it back on again
:rg_move_fence[3742] [[ FALSE == TRUE ]]
:rg_move_fence[3747] exit 0
+epprd_rg:rg_move_fence[104] : exit status of process_resources FENCE is: 0
+epprd_rg:rg_move_fence[107] [[ TRUE == TRUE ]]
+epprd_rg:rg_move_fence[109] export EVENT_TYPE
+epprd_rg:rg_move_fence[110] echo ACQUIRE_PRIMARY_NFS
ACQUIRE_PRIMARY_NFS
+epprd_rg:rg_move_fence[111] [[ -n '' ]]
+epprd_rg:rg_move_fence[141] exit 0
Jan 28 2023 17:11:39 EVENT COMPLETED: rg_move_fence epprds 1 0
|2023-01-28T17:11:39|22167|EVENT COMPLETED: rg_move_fence epprds 1 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:11:39.779807
+ echo '|2023-01-28T17:11:39.779807|INFO: rg_move_fence|epprd_rg|epprds|1|0'
+ 1>> /var/hacmp/availability/clavailability.log
Jan 28 2023 17:11:39 EVENT START: rg_move_acquire epprds 1
|2023-01-28T17:11:39|22167|EVENT START: rg_move_acquire epprds 1|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:11:39.983945
+ echo '|2023-01-28T17:11:39.983945|INFO: rg_move_acquire|epprd_rg|epprds|1'
+ 1>> /var/hacmp/availability/clavailability.log
:rg_move_acquire[+54] [[ high == high ]]
:rg_move_acquire[+54] version=1.9.1.7
:rg_move_acquire[+57] set -u
:rg_move_acquire[+59] [ 2 != 2 ]
:rg_move_acquire[+65] set +u
:rg_move_acquire[+67] :rg_move_acquire[+67] clodmget -n -q id=1 -f group HACMPgroup
RG=epprd_rg
:rg_move_acquire[+68] export RG
:rg_move_acquire[+70] [[ ACQUIRE_PRIMARY_NFS == ACQUIRE_PRIMARY ]]
:rg_move_acquire[+118] clcallev rg_move epprds 1 ACQUIRE
Jan 28 2023 17:11:40 EVENT START: rg_move epprds 1 ACQUIRE
|2023-01-28T17:11:40|22167|EVENT START: rg_move epprds 1 ACQUIRE|
:clevlog[amlog_trace:318] clcycle clavailability.log
:clevlog[amlog_trace:318] 1> /dev/null 2>& 1
:clevlog[amlog_trace:319] cltime
:clevlog[amlog_trace:319] DATE=2023-01-28T17:11:40.110533
:clevlog[amlog_trace:320] echo '|2023-01-28T17:11:40.110533|INFO: rg_move|epprd_rg|epprds|1|ACQUIRE'
:clevlog[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
:get_local_nodename[48] version=1.2.1.28
:get_local_nodename[52] : cllsclstr -N will return the local node if not configured in HACMPcluster
:get_local_nodename[54] ODMDIR=/etc/es/objrepos
:get_local_nodename[54] export ODMDIR
:get_local_nodename[55] nodename=''
:get_local_nodename[55] typeset nodename
:get_local_nodename[56] cllsclstr -N
:get_local_nodename[56] nodename=epprda
:get_local_nodename[57] rc=0
:get_local_nodename[57] typeset -i rc
:get_local_nodename[58] (( 0 != 0 ))
:get_local_nodename[61] : If the node name in HACMPcluster matches a configured node, we are done.
:get_local_nodename[63] clnodename
:get_local_nodename[63] grep -w epprda
:get_local_nodename[63] [[ -n epprda ]]
:get_local_nodename[65] print -- epprda
:get_local_nodename[66] exit 0
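# Annotation (editor, not part of the trace): get_local_nodename resolves the
# local node name from the cluster ODM and sanity-checks it against the list
# of configured nodes. A minimal sketch of the logic traced above:
ODMDIR=/etc/es/objrepos; export ODMDIR
nodename=$(cllsclstr -N)                  # local node per HACMPcluster
typeset -i rc=$?
(( rc != 0 )) && exit $rc                 # no cluster definition => give up
if [[ -n $(clnodename | grep -w -- "$nodename") ]]; then
    print -- "$nodename"                  # name matches a configured node
    exit 0
fi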
:rg_move[76] version=%I%
:rg_move[86] STATUS=0
:rg_move[88] [[ ! -n '' ]]
:rg_move[90] EMULATE=REAL
:rg_move[96] set -u
:rg_move[98] NODENAME=epprds
:rg_move[98] export NODENAME
:rg_move[99] RGID=1
:rg_move[100] (( 3 == 3 ))
:rg_move[102] ACTION=ACQUIRE
:rg_move[108] : serial number for this event is 22167
:rg_move[112] RG_UP_POSTEVENT_ON_NODE=epprds
:rg_move[112] export RG_UP_POSTEVENT_ON_NODE
:rg_move[116] clodmget -qid=1 -f group -n HACMPgroup
:rg_move[116] eval RGNAME=epprd_rg
:rg_move[1] RGNAME=epprd_rg
:rg_move[118] UPDATESTATD=0
:rg_move[119] export UPDATESTATD
:rg_move[123] RG_MOVE_EVENT=true
:rg_move[123] export RG_MOVE_EVENT
:rg_move[128] group_state='$RESGRP_epprd_rg_epprda'
:rg_move[129] set +u
:rg_move[130] eval print '$RESGRP_epprd_rg_epprda'
:rg_move[1] print ONLINE
:rg_move[130] RG_MOVE_ONLINE=ONLINE
:rg_move[130] export RG_MOVE_ONLINE
:rg_move[131] set -u
:rg_move[132] RG_MOVE_ONLINE=ONLINE
:rg_move[139] rm -f /tmp/.NFSSTOPPED
:rg_move[140] rm -f /tmp/.RPCLOCKDSTOPPED
:rg_move[147] set -a
:rg_move[148] clsetenvgrp epprds rg_move epprd_rg
:clsetenvgrp[+49] [[ high = high ]]
:clsetenvgrp[+49] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clsetenvgrp.sh 1$
:clsetenvgrp[+51] usingVer=clSetenvgrp
:clsetenvgrp[+56] clSetenvgrp epprds rg_move epprd_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+57] exit 0
:rg_move[148] clsetenvgrp_output=FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
:rg_move[149] RC=0
:rg_move[150] eval FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
:rg_move[1] FORCEDOWN_GROUPS=''
:rg_move[2] RESOURCE_GROUPS=''
:rg_move[3] HOMELESS_GROUPS=''
:rg_move[4] HOMELESS_FOLLOWER_GROUPS=''
:rg_move[5] ERRSTATE_GROUPS=''
:rg_move[6] PRINCIPAL_ACTIONS=''
:rg_move[7] ASSOCIATE_ACTIONS=''
:rg_move[8] AUXILLIARY_ACTIONS=''
:rg_move[8] SIBLING_GROUPS=''
:rg_move[9] SIBLING_NODES_BY_GROUP=''
:rg_move[10] SIBLING_ACQUIRING_GROUPS=''
:rg_move[11] SIBLING_ACQUIRING_NODES_BY_GROUP=''
:rg_move[12] SIBLING_RELEASING_GROUPS=''
:rg_move[13] SIBLING_RELEASING_NODES_BY_GROUP=''
:rg_move[151] set +a
:rg_move[155] (( 0 != 0 ))
:rg_move[155] [[ -z epprd_rg ]]
:rg_move[164] [[ -z TRUE ]]
:rg_move[241] AM_SYNC_CALLED_BY=RG_MOVE
:rg_move[241] export AM_SYNC_CALLED_BY
:rg_move[242] process_resources
:process_resources[3318] version=1.169
:process_resources[3321] STATUS=0
:process_resources[3322] sddsrv_off=FALSE
:process_resources[3324] true
:process_resources[3326] : call rgpa, and it will tell us what to do next
:process_resources[3328] set -a
:process_resources[3329] clRGPA
:clRGPA[+47] [[ high = high ]]
:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
:clRGPA[+49] usingVer=clrgpa
:clRGPA[+54] clrgpa
2023-01-28T17:11:40.231967 clrgpa
:clRGPA[+55] exit 0
:process_resources[3329] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[3330] RC=0
:process_resources[3331] set +a
:process_resources[3333] (( 0 != 0 ))
:process_resources[3342] RESOURCE_GROUPS=''
:process_resources[3343] GROUPNAME=''
:process_resources[3343] export GROUPNAME
:process_resources[3353] IS_SERVICE_START=1
:process_resources[3354] IS_SERVICE_STOP=1
:process_resources[3360] [[ NONE == RELEASE ]]
:process_resources[3360] [[ NONE == ONLINE ]]
:process_resources[3729] break
:process_resources[3740] : If sddsrv was turned off above, turn it back on again
:process_resources[3742] [[ FALSE == TRUE ]]
:process_resources[3747] exit 0
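# Annotation (editor, not part of the trace): process_resources is a dispatch
# loop around clRGPA (the resource group policy analyzer). Each clRGPA call
# prints shell assignments that are eval'ed under set -a, and JOB_TYPE picks
# the next action; JOB_TYPE=NONE, as here, ends the loop. A minimal sketch,
# assuming that output contract:
while true
do
    set -a
    eval "$(clRGPA)"                      # sets JOB_TYPE, RESOURCE_GROUPS, ...
    set +a
    case $JOB_TYPE in
        NONE) break ;;                    # nothing more to process
        *)    : acquire/release handling for $JOB_TYPE would run here ;;
    esac
done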
:rg_move[247] : unsetting AM_SYNC_CALLED_BY from $'callers environment as\n: we dont' require it after this point in execution.
:rg_move[250] unset AM_SYNC_CALLED_BY
:rg_move[253] [[ -f /tmp/.NFSSTOPPED ]]
:rg_move[274] [[ -f /tmp/.RPCLOCKDSTOPPED ]]
:rg_move[293] exit 0
Jan 28 2023 17:11:40 EVENT COMPLETED: rg_move epprds 1 ACQUIRE 0
|2023-01-28T17:11:40|22167|EVENT COMPLETED: rg_move epprds 1 ACQUIRE 0|
:clevlog[amlog_trace:318] clcycle clavailability.log
:clevlog[amlog_trace:318] 1> /dev/null 2>& 1
:clevlog[amlog_trace:319] cltime
:clevlog[amlog_trace:319] DATE=2023-01-28T17:11:40.363068
:clevlog[amlog_trace:320] echo '|2023-01-28T17:11:40.363068|INFO: rg_move|epprd_rg|epprds|1|ACQUIRE|0'
:clevlog[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
:rg_move_acquire[+119] exit_status=0
:rg_move_acquire[+120] : exit status of clcallev rg_move epprds 1 ACQUIRE is: 0
:rg_move_acquire[+121] exit 0
Jan 28 2023 17:11:40 EVENT COMPLETED: rg_move_acquire epprds 1 0
|2023-01-28T17:11:40|22167|EVENT COMPLETED: rg_move_acquire epprds 1 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:11:40.483272
+ echo '|2023-01-28T17:11:40.483272|INFO: rg_move_acquire|epprd_rg|epprds|1|0'
+ 1>> /var/hacmp/availability/clavailability.log
Jan 28 2023 17:11:40 EVENT START: rg_move_complete epprds 1
|2023-01-28T17:11:40|22167|EVENT START: rg_move_complete epprds 1|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:11:40.664809
+ echo '|2023-01-28T17:11:40.664809|INFO: rg_move_complete|epprd_rg|epprds|1'
+ 1>> /var/hacmp/availability/clavailability.log
:get_local_nodename[48] version=1.2.1.28
:get_local_nodename[52] : cllsclstr -N will return the local node if not configured in HACMPcluster
:get_local_nodename[54] ODMDIR=/etc/es/objrepos
:get_local_nodename[54] export ODMDIR
:get_local_nodename[55] nodename=''
:get_local_nodename[55] typeset nodename
:get_local_nodename[56] cllsclstr -N
:get_local_nodename[56] nodename=epprda
:get_local_nodename[57] rc=0
:get_local_nodename[57] typeset -i rc
:get_local_nodename[58] (( 0 != 0 ))
:get_local_nodename[61] : If the node name in HACMPcluster matches a configured node, we are done.
:get_local_nodename[63] grep -w epprda
:get_local_nodename[63] clnodename
:get_local_nodename[63] [[ -n epprda ]]
:get_local_nodename[65] print -- epprda
:get_local_nodename[66] exit 0
:rg_move_complete[91] version=%I%
:rg_move_complete[97] STATUS=0
:rg_move_complete[97] typeset -li STATUS
:rg_move_complete[99] [[ -z '' ]]
:rg_move_complete[101] EMULATE=REAL
:rg_move_complete[104] set -u
:rg_move_complete[106] (( 2 < 2 || 2 > 3 ))
:rg_move_complete[112] NODENAME=epprds
:rg_move_complete[112] export NODENAME
:rg_move_complete[113] RGID=1
:rg_move_complete[114] (( 2 == 3 ))
:rg_move_complete[118] RGDESTINATION=''
:rg_move_complete[122] : serial number for this event is 22167
:rg_move_complete[126] : Interpret resource group ID into a resource group name.
:rg_move_complete[128] clodmget -qid=1 -f group -n HACMPgroup
:rg_move_complete[128] eval RGNAME=epprd_rg
:rg_move_complete[1] RGNAME=epprd_rg
+epprd_rg:rg_move_complete[129] GROUPNAME=epprd_rg
+epprd_rg:rg_move_complete[131] UPDATESTATD=0
+epprd_rg:rg_move_complete[131] typeset -li UPDATESTATD
+epprd_rg:rg_move_complete[132] NFSSTOPPED=0
+epprd_rg:rg_move_complete[132] typeset -li NFSSTOPPED
+epprd_rg:rg_move_complete[133] LIMIT=60
+epprd_rg:rg_move_complete[133] WAIT=1
+epprd_rg:rg_move_complete[133] TRY=0
+epprd_rg:rg_move_complete[133] typeset -li LIMIT WAIT TRY
+epprd_rg:rg_move_complete[136] : If this is a two node cluster and exported filesystems exist, then
+epprd_rg:rg_move_complete[137] : when the cluster topology is stable notify rpc.statd of the changes.
+epprd_rg:rg_move_complete[139] wc -l
+epprd_rg:rg_move_complete[139] clnodename
+epprd_rg:rg_move_complete[139] (( 2 == 2 ))
+epprd_rg:rg_move_complete[141] clodmget -f group -n HACMPgroup
+epprd_rg:rg_move_complete[141] RESOURCE_GROUPS=epprd_rg
+epprd_rg:rg_move_complete[144] clodmget -q group='epprd_rg AND name=EXPORT_FILESYSTEM' -f value -n HACMPresource
+epprd_rg:rg_move_complete[144] EXPORTLIST=/board_org
+epprd_rg:rg_move_complete[146] [[ -n /board_org ]]
+epprd_rg:rg_move_complete[146] [[ epprd_rg == epprd_rg ]]
+epprd_rg:rg_move_complete[148] UPDATESTATD=1
+epprd_rg:rg_move_complete[149] [[ REAL == EMUL ]]
+epprd_rg:rg_move_complete[154] cl_update_statd
:cl_update_statd(0)[+174] version=%I%
:cl_update_statd(0)[+176] typeset -i RC=0
:cl_update_statd(0)[+178] LOCAL_FOUND=
:cl_update_statd(0)[+179] TWIN_NAME=
:cl_update_statd(0)[+180] [[ -z epprda ]]
:cl_update_statd(0)[+181] :cl_update_statd(0)[+181] cl_get_path -S
OP_SEP=~
:cl_update_statd(0)[+182] set -u
:cl_update_statd(0)[+187] LOCAL_FOUND=true
:cl_update_statd(0)[+189] TWIN_NAME=epprds
:cl_update_statd(0)[+194] : Make sure statd is running locally
:cl_update_statd(0)[+196] grep -qw inoperative
:cl_update_statd(0)[+196] lssrc -s statd
:cl_update_statd(0)[+196] LC_ALL=C
:cl_update_statd(0)[+196] grep -qw status
:cl_update_statd(0)[+196] rpcinfo -p
:cl_update_statd(0)[+196] LC_ALL=C
:cl_update_statd(0)[+207] : Get the current twin, if there is one
:cl_update_statd(0)[+209] :cl_update_statd(0)[+209] nfso -H sm_gethost
:cl_update_statd(0)[+209] 2>& 1
CURTWIN=
:cl_update_statd(0)[+210] RC=0
:cl_update_statd(0)[+212] [[ -z true ]]
:cl_update_statd(0)[+212] [[ -z epprds ]]
:cl_update_statd(0)[+225] : Get the interface to the twin node
:cl_update_statd(0)[+227] :cl_update_statd(0)[+227] get_node_ip epprds
:cl_update_statd(0)[+9] (( 1 != 1 ))
:cl_update_statd(0)[+15] Twin_Name=epprds
:cl_update_statd(0)[+16] NewTwin=
:cl_update_statd(0)[+19] : Get the Interface details for every interface on the twin node
:cl_update_statd(0)[+20] : Reject interfaces on nodes that are not public boot addresses
:cl_update_statd(0)[+21] : because those are the only ones we have state information for
:cl_update_statd(0)[+23] :cl_update_statd(0)[+23] cllsif -J ~ -Sw -i epprda
:cl_update_statd(0)[+23] LC_ALL=C
LOCAL_NETWORK_INFO=epprda~boot~net_ether_01~ether~public~epprda~61.81.244.134~~en0~~255.255.255.0~~~24~AF_INET
epprd~service~net_ether_01~ether~public~epprda~61.81.244.156~~~~255.255.255.0~~ignore~24~AF_INET
:cl_update_statd(0)[+25] read adapt type network net_type attrib node ip_addr skip interface skip netmask skip skip prefix ip_family
:cl_update_statd(0)[+25] IFS=~
:cl_update_statd(0)[+24] cllsif -J ~ -Sw -i epprds
:cl_update_statd(0)[+24] LC_ALL=C
:cl_update_statd(0)[+25] [[ public != public ]]
:cl_update_statd(0)[+25] [[ boot != boot ]]
:cl_update_statd(0)[+33] : Find the state of this candidate
:cl_update_statd(0)[+33] [[ AF_INET == AF_INET ]]
:cl_update_statd(0)[+37] :cl_update_statd(0)[+37] tr ./ xx
:cl_update_statd(0)[+37] print 61.81.244.123
addr=i61x81x244x123_epprds
:cl_update_statd(0)[+43] eval candidate_state=${i61x81x244x123_epprds:-down}
:cl_update_statd(0)[+43] candidate_state=UP
:cl_update_statd(0)[+46] : If state is UP, check to see if this node can talk to it
:cl_update_statd(0)[+46] [[ UP == UP ]]
:cl_update_statd(0)[+50] ping -w 5 -c 1 -q 61.81.244.123
:cl_update_statd(0)[+50] 1> /dev/null
:cl_update_statd(0)[+61] echo epprda~boot~net_ether_01~ether~public~epprda~61.81.244.134~~en0~~255.255.255.0~~~24~AF_INET epprd~service~net_ether_01~ether~public~epprda~61.81.244.156~~~~255.255.255.0~~ignore~24~AF_INET
:cl_update_statd(0)[+62] read lcl_adapt lcl_type lcl_network lcl_net_type lcl_attrib lcl_node lcl_ip_addr skip lcl_interface skip lcl_netmask skip skip lcl_prefix lcl_ip_family
:cl_update_statd(0)[+62] IFS=~
:cl_update_statd(0)[+61] tr \n
:cl_update_statd(0)[+62] [[ net_ether_01 != net_ether_01 ]]
:cl_update_statd(0)[+62] [[ boot != boot ]]
:cl_update_statd(0)[+62] [[ public != public ]]
:cl_update_statd(0)[+62] [[ AF_INET != AF_INET ]]
:cl_update_statd(0)[+62] [[ AF_INET == AF_INET ]]
:cl_update_statd(0)[+71] :cl_update_statd(0)[+71] tr ./ xx
:cl_update_statd(0)[+71] print 61.81.244.134
addr=i61x81x244x134_epprda
:cl_update_statd(0)[+77] eval lcl_candidate_state=${i61x81x244x134_epprda:-down}
:cl_update_statd(0)[+77] lcl_candidate_state=UP
:cl_update_statd(0)[+77] [[ UP == UP ]]
:cl_update_statd(0)[+81] : epprds is on the same network as an interface that is up
:cl_update_statd(0)[+82] : on the local node, and the attributes match.
:cl_update_statd(0)[+84] NewTwin=epprds
:cl_update_statd(0)[+85] break
:cl_update_statd(0)[+85] [[ -n epprds ]]
:cl_update_statd(0)[+91] break
:cl_update_statd(0)[+91] [[ -z epprds ]]
:cl_update_statd(0)[+100] echo epprds
:cl_update_statd(0)[+101] return 0
NEWTWIN=epprds
:cl_update_statd(0)[+227] [[ -z epprds ]]
:cl_update_statd(0)[+227] [[ epprds != ]]
:cl_update_statd(0)[+243] : Need to register a new twin
:cl_update_statd(0)[+243] [[ -n ]]
:cl_update_statd(0)[+251] : Register our new twin, epprds
:cl_update_statd(0)[+253] nfso -H sm_register epprds
:cl_update_statd(0)[+254] RC=0
:cl_update_statd(0)[+259] : RC is actually 0
:cl_update_statd(0)[+266] return 0
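# Annotation (editor, not part of the trace): cl_update_statd keeps the NFS
# status monitor's notion of its "twin" in step with cluster membership: it
# confirms rpc.statd is registered, reads the current twin with
# 'nfso -H sm_gethost', selects a reachable public boot address on the peer,
# and registers it with 'nfso -H sm_register'. A minimal sketch of that tail
# end, using the nfso operations exactly as traced:
CURTWIN=$(nfso -H sm_gethost 2>&1)        # current twin; empty on first pass
NEWTWIN=$(get_node_ip epprds)             # reachable interface on the peer
if [[ -n $NEWTWIN && $CURTWIN != "$NEWTWIN" ]]; then
    nfso -H sm_register "$NEWTWIN"        # (re)register the statd twin
fi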
+epprd_rg:rg_move_complete[155] (( 0 != 0 ))
+epprd_rg:rg_move_complete[160] break
+epprd_rg:rg_move_complete[166] : Set the RESOURCE_GROUPS environment variable with the names
+epprd_rg:rg_move_complete[167] : of all resource groups participating in this event, and export
+epprd_rg:rg_move_complete[168] : them to all successive scripts.
+epprd_rg:rg_move_complete[170] set -a
+epprd_rg:rg_move_complete[171] clsetenvgrp epprds rg_move_complete epprd_rg
:clsetenvgrp[+49] [[ high = high ]]
:clsetenvgrp[+49] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clsetenvgrp.sh 1$
:clsetenvgrp[+51] usingVer=clSetenvgrp
:clsetenvgrp[+56] clSetenvgrp epprds rg_move_complete epprd_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+57] exit 0
+epprd_rg:rg_move_complete[171] clsetenvgrp_output=FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
+epprd_rg:rg_move_complete[172] RC=0
+epprd_rg:rg_move_complete[173] eval FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
+epprd_rg:rg_move_complete[1] FORCEDOWN_GROUPS=''
+epprd_rg:rg_move_complete[2] RESOURCE_GROUPS=''
+epprd_rg:rg_move_complete[3] HOMELESS_GROUPS=''
+epprd_rg:rg_move_complete[4] HOMELESS_FOLLOWER_GROUPS=''
+epprd_rg:rg_move_complete[5] ERRSTATE_GROUPS=''
+epprd_rg:rg_move_complete[6] PRINCIPAL_ACTIONS=''
+epprd_rg:rg_move_complete[7] ASSOCIATE_ACTIONS=''
+epprd_rg:rg_move_complete[8] AUXILLIARY_ACTIONS=''
+epprd_rg:rg_move_complete[8] SIBLING_GROUPS=''
+epprd_rg:rg_move_complete[9] SIBLING_NODES_BY_GROUP=''
+epprd_rg:rg_move_complete[10] SIBLING_ACQUIRING_GROUPS=''
+epprd_rg:rg_move_complete[11] SIBLING_ACQUIRING_NODES_BY_GROUP=''
+epprd_rg:rg_move_complete[12] SIBLING_RELEASING_GROUPS=''
+epprd_rg:rg_move_complete[13] SIBLING_RELEASING_NODES_BY_GROUP=''
+epprd_rg:rg_move_complete[174] set +a
+epprd_rg:rg_move_complete[175] (( 0 != 0 ))
+epprd_rg:rg_move_complete[182] : For each participating resource group, serially process the resources.
+epprd_rg:rg_move_complete[251] (( 1 == 1 ))
+epprd_rg:rg_move_complete[253] [[ REAL == EMUL ]]
+epprd_rg:rg_move_complete[259] stopsrc -s rpc.lockd
0513-044 The rpc.lockd Subsystem was requested to stop.
+epprd_rg:rg_move_complete[260] rcstopsrc=0
+epprd_rg:rg_move_complete[261] (( 0 != 0 ))
+epprd_rg:rg_move_complete[266] (( TRY=0))
+epprd_rg:rg_move_complete[266] (( 0<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 1<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 2<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 3<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 4<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z stopping ]]
+epprd_rg:rg_move_complete[271] sleep 1
+epprd_rg:rg_move_complete[266] ((TRY++ ))
+epprd_rg:rg_move_complete[266] (( 5<60))
+epprd_rg:rg_move_complete[268] lssrc -s rpc.lockd
+epprd_rg:rg_move_complete[268] LC_ALL=C
+epprd_rg:rg_move_complete[268] tail -1
+epprd_rg:rg_move_complete[268] read name subsystem pid state
+epprd_rg:rg_move_complete[269] [[ ! -z '' ]]
+epprd_rg:rg_move_complete[273] break
+epprd_rg:rg_move_complete[277] [[ ! -z '' ]]
+epprd_rg:rg_move_complete[300] : Sure that rpc.lockd stopped. Restart it.
+epprd_rg:rg_move_complete[302] startsrc -s rpc.lockd
0513-059 The rpc.lockd Subsystem has been started. Subsystem PID is 26214734.
+epprd_rg:rg_move_complete[303] rcstartsrc=0
+epprd_rg:rg_move_complete[304] (( 0 != 0 ))
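# Annotation (editor, not part of the trace): rg_move_complete bounces
# rpc.lockd so it picks up the new statd twin: request a stop, then poll
# lssrc once per second (up to LIMIT tries) until the subsystem leaves the
# "stopping" state, and finally start it again. A minimal sketch of the loop:
typeset -i LIMIT=60 WAIT=1 TRY=0
stopsrc -s rpc.lockd
for (( TRY=0; TRY<LIMIT; TRY++ ))
do
    LC_ALL=C lssrc -s rpc.lockd | tail -1 | read name subsystem pid state
    [[ -z $state ]] && break              # empty state column => fully stopped
    sleep $WAIT
done
startsrc -s rpc.lockd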
+epprd_rg:rg_move_complete[365] : If the resource group in this rg_move is now homeless,
+epprd_rg:rg_move_complete[366] : then we need to put it into an error state.
+epprd_rg:rg_move_complete[368] active_node=0
+epprd_rg:rg_move_complete[428] : If the resource group in this rg_move is now homeless_secondary,
+epprd_rg:rg_move_complete[429] : then we need to put it into an errorsecondary state.
+epprd_rg:rg_move_complete[437] : Set an error state for concurrent groups that have
+epprd_rg:rg_move_complete[438] : been brought offline on this node by rg_move.
+epprd_rg:rg_move_complete[453] AM_SYNC_CALLED_BY=RG_MOVE_COMPLETE
+epprd_rg:rg_move_complete[453] export AM_SYNC_CALLED_BY
+epprd_rg:rg_move_complete[454] process_resources
:process_resources[3318] version=1.169
:process_resources[3321] STATUS=0
:process_resources[3322] sddsrv_off=FALSE
:process_resources[3324] true
:process_resources[3326] : call rgpa, and it will tell us what to do next
:process_resources[3328] set -a
:process_resources[3329] clRGPA
:clRGPA[+47] [[ high = high ]]
:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
:clRGPA[+49] usingVer=clrgpa
:clRGPA[+54] clrgpa
2023-01-28T17:11:45.884436 clrgpa
:clRGPA[+55] exit 0
:process_resources[3329] eval JOB_TYPE=NONE
:process_resources[1] JOB_TYPE=NONE
:process_resources[3330] RC=0
:process_resources[3331] set +a
:process_resources[3333] (( 0 != 0 ))
:process_resources[3342] RESOURCE_GROUPS=''
:process_resources[3343] GROUPNAME=''
:process_resources[3343] export GROUPNAME
:process_resources[3353] IS_SERVICE_START=1
:process_resources[3354] IS_SERVICE_STOP=1
:process_resources[3360] [[ NONE == RELEASE ]]
:process_resources[3360] [[ NONE == ONLINE ]]
:process_resources[3729] break
:process_resources[3740] : If sddsrv was turned off above, turn it back on again
:process_resources[3742] [[ FALSE == TRUE ]]
:process_resources[3747] exit 0
+epprd_rg:rg_move_complete[455] STATUS=0
+epprd_rg:rg_move_complete[456] : The exit status of process_resources is: 0
+epprd_rg:rg_move_complete[461] unset AM_SYNC_CALLED_BY
+epprd_rg:rg_move_complete[462] [[ TRUE == TRUE ]]
+epprd_rg:rg_move_complete[491] [[ -z '' ]]
+epprd_rg:rg_move_complete[493] RESOURCE_GROUPS=epprd_rg
+epprd_rg:rg_move_complete[499] GROUPNAME=epprd_rg
+epprd_rg:rg_move_complete[499] export GROUPNAME
+epprd_rg:rg_move_complete[501] cl_rrmethods2call postrg_move
+epprd_rg:cl_rrmethods2call[56] version=%I%
+epprd_rg:cl_rrmethods2call[84] RRMETHODS=''
+epprd_rg:cl_rrmethods2call[85] NEED_RR_ENV_VARS=no
+epprd_rg:cl_rrmethods2call[124] NEED_RR_ENV_VARS=yes
+epprd_rg:cl_rrmethods2call[129] : Set the '*_REP_RESOURCE' variables if needed.
+epprd_rg:cl_rrmethods2call[131] [[ yes == yes ]]
+epprd_rg:cl_rrmethods2call[133] cllsres
+epprd_rg:cl_rrmethods2call[133] 2> /dev/null
+epprd_rg:cl_rrmethods2call[133] eval APPLICATIONS='"epprd_app"' EXPORT_FILESYSTEM='"/board_org"' FILESYSTEM='""' FORCED_VARYON='"false"' FSCHECK_TOOL='"fsck"' FS_BEFORE_IPADDR='"false"' MOUNT_FILESYSTEM='"/board;/board_org"' RECOVERY_METHOD='"sequential"' SERVICE_LABEL='"epprd"' SSA_DISK_FENCING='"false"' VG_AUTO_IMPORT='"false"' VOLUME_GROUP='"datavg"' USERDEFINED_RESOURCES='""'
+epprd_rg:cl_rrmethods2call[1] APPLICATIONS=epprd_app
+epprd_rg:cl_rrmethods2call[1] EXPORT_FILESYSTEM=/board_org
+epprd_rg:cl_rrmethods2call[1] FILESYSTEM=''
+epprd_rg:cl_rrmethods2call[1] FORCED_VARYON=false
+epprd_rg:cl_rrmethods2call[1] FSCHECK_TOOL=fsck
+epprd_rg:cl_rrmethods2call[1] FS_BEFORE_IPADDR=false
+epprd_rg:cl_rrmethods2call[1] MOUNT_FILESYSTEM='/board;/board_org'
+epprd_rg:cl_rrmethods2call[1] RECOVERY_METHOD=sequential
+epprd_rg:cl_rrmethods2call[1] SERVICE_LABEL=epprd
+epprd_rg:cl_rrmethods2call[1] SSA_DISK_FENCING=false
+epprd_rg:cl_rrmethods2call[1] VG_AUTO_IMPORT=false
+epprd_rg:cl_rrmethods2call[1] VOLUME_GROUP=datavg
+epprd_rg:cl_rrmethods2call[1] USERDEFINED_RESOURCES=''
+epprd_rg:cl_rrmethods2call[137] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[142] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[147] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[152] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[157] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[162] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[167] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[172] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[182] [[ -z '' ]]
+epprd_rg:cl_rrmethods2call[184] typeset sysmgdata
+epprd_rg:cl_rrmethods2call[185] typeset reposmgdata
+epprd_rg:cl_rrmethods2call[186] [[ -x /usr/es/sbin/cluster/xd_generic/xd_cli/clxd_list_mg_smit ]]
+epprd_rg:cl_rrmethods2call[191] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[191] [[ -n '' ]]
+epprd_rg:cl_rrmethods2call[197] echo ''
+epprd_rg:cl_rrmethods2call[199] return 0
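# Annotation (editor, not part of the trace): cl_rrmethods2call evals the
# cllsres output to load the group's resource variables, then prints the
# names of any replicated-resource recovery methods that apply; with none of
# the '*_REP_RESOURCE' variables set, as here, it prints an empty string. A
# minimal sketch under that assumption (XYZ_REP_RESOURCE and cl_xyz_method
# are hypothetical placeholders, not real method names):
eval "$(cllsres 2>/dev/null)"             # APPLICATIONS=..., VOLUME_GROUP=..., etc.
RRMETHODS=''
[[ -n ${XYZ_REP_RESOURCE:-} ]] && RRMETHODS="$RRMETHODS cl_xyz_method"
echo "$RRMETHODS"                         # empty here: no methods to call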
+epprd_rg:rg_move_complete[501] METHODS=''
+epprd_rg:rg_move_complete[516] refresh -s clcomd
0513-095 The request for subsystem refresh was completed successfully.
+epprd_rg:rg_move_complete[518] exit 0
Jan 28 2023 17:11:45 EVENT COMPLETED: rg_move_complete epprds 1 0
|2023-01-28T17:11:45|22167|EVENT COMPLETED: rg_move_complete epprds 1 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:11:46.003084
+ echo '|2023-01-28T17:11:46.003084|INFO: rg_move_complete|epprd_rg|epprds|1|0'
+ 1>> /var/hacmp/availability/clavailability.log
PowerHA SystemMirror Event Summary
----------------------------------------------------------------------------
Serial number for this event: 22167
Event: TE_RG_MOVE_ACQUIRE
Start time: Sat Jan 28 17:11:39 2023
End time: Sat Jan 28 17:11:47 2023
Action: Resource: Script Name:
----------------------------------------------------------------------------
No resources changed as a result of this event
----------------------------------------------------------------------------
|EVENT_SUMMARY_START|TE_RG_MOVE_ACQUIRE|2023-01-28T17:11:39|2023-01-28T17:11:47|22167|
|EVENT_NO_ACTION|
|EVENT_SUMMARY_END|
PowerHA SystemMirror Event Preamble
----------------------------------------------------------------------------
Serial number for this event: 22167
No resource state change initiated by the cluster manager as a result of this event
----------------------------------------------------------------------------
|EVENT_PREAMBLE_START|TE_JOIN_NODE_DEP_COMPLETE|2023-01-28T17:11:49|22167|
|EVENT_NO_ACTIONS_QUEUED|
|EVENT_PREAMBLE_END|
Jan 28 2023 17:11:49 EVENT START: node_up_complete epprds
|2023-01-28T17:11:49|22167|EVENT START: node_up_complete epprds|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:11:49.485303
+ echo '|2023-01-28T17:11:49.485303|INFO: node_up_complete|epprds'
+ 1>> /var/hacmp/availability/clavailability.log
+ version=%I%
+ set -a
+ cllsparam -n epprda
+ eval NODE_NAME=epprda VERBOSE_LOGGING=high PS4=$'\'${GROUPNAME:++$GROUPNAME}:${PROGNAME:-${0##*/}}${PS4_TIMER:+($SECONDS)}${PS4_LOOP:+:$PS4_LOOP}[${ERRNO:+${PS4_FUNC:-}+}${KSH_VERSION:+${.sh.fun:+${.sh.fun}:}}$LINENO]' $'\'' DEBUG_LEVEL=Standard LC_ALL=$'\'C\''
+ NODE_NAME=epprda
+ VERBOSE_LOGGING=high
:node_up_complete[1] PS4='${GROUPNAME:++$GROUPNAME}:${PROGNAME:-${0##*/}}${PS4_TIMER:+($SECONDS)}${PS4_LOOP:+:$PS4_LOOP}[${ERRNO:+${PS4_FUNC:-}+}${KSH_VERSION:+${.sh.fun:+${.sh.fun}:}}$LINENO] '
:node_up_complete[1] DEBUG_LEVEL=Standard
:node_up_complete[1] LC_ALL=C
:node_up_complete[80] set +a
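# Annotation (editor, not part of the trace): every ':script[line]' prefix in
# this log comes from the PS4 template imported just above. cllsparam -n
# prints NAME=value pairs (NODE_NAME, VERBOSE_LOGGING, PS4, DEBUG_LEVEL,
# LC_ALL) that are eval'ed under set -a; with VERBOSE_LOGGING=high the event
# scripts run traced, so each command is echoed with the expanded PS4 prefix.
# A minimal sketch:
set -a
eval "$(cllsparam -n epprda)"             # imports PS4, VERBOSE_LOGGING, ...
set +a
# From here on, 'set -x' output looks like ':node_up_complete[82] NODENAME=epprds'.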
:node_up_complete[82] NODENAME=epprds
:node_up_complete[83] RC=0
:node_up_complete[83] typeset -i RC
:node_up_complete[84] UPDATESTATD=0
:node_up_complete[84] typeset -i UPDATESTATD
:node_up_complete[86] LPM_IN_PROGRESS_DIR=/var/hacmp/.lpm_in_progress
:node_up_complete[86] typeset LPM_IN_PROGRESS_DIR
:node_up_complete[87] LPM_IN_PROGRESS_PREFIX=lpm
:node_up_complete[87] typeset LPM_IN_PROGRESS_PREFIX
:node_up_complete[88] STATE_FILE=/var/hacmp/cl_dr.state
:node_up_complete[88] typeset STATE_FILE
:node_up_complete[97] STATUS=0
:node_up_complete[99] set -u
:node_up_complete[101] (( 1 < 1 ))
:node_up_complete[107] START_MODE=''
:node_up_complete[107] typeset START_MODE
:node_up_complete[108] (( 1 > 1 ))
:node_up_complete[114] : serial number for this event is 22167
:node_up_complete[118] RPCLOCKDSTOPPED=0
:node_up_complete[118] typeset -i RPCLOCKDSTOPPED
:node_up_complete[119] [[ -f /tmp/.RPCLOCKDSTOPPED ]]
:node_up_complete[127] clnodename
:node_up_complete[127] wc -l
:node_up_complete[127] (( 2 == 2 ))
:node_up_complete[129] clodmget -f group -n HACMPgroup
:node_up_complete[129] RESOURCE_GROUPS=epprd_rg
:node_up_complete[132] clodmget -q group='epprd_rg AND name=EXPORT_FILESYSTEM' -f value -n HACMPresource
:node_up_complete[132] EXPORTLIST=/board_org
:node_up_complete[133] [[ -n /board_org ]]
:node_up_complete[135] UPDATESTATD=1
:node_up_complete[136] [[ epprds == epprda ]]
:node_up_complete[146] cl_update_statd
:cl_update_statd(0)[+174] version=%I%
:cl_update_statd(0)[+176] typeset -i RC=0
:cl_update_statd(0)[+178] LOCAL_FOUND=
:cl_update_statd(0)[+179] TWIN_NAME=
:cl_update_statd(0)[+180] [[ -z epprda ]]
:cl_update_statd(0)[+181] :cl_update_statd(0)[+181] cl_get_path -S
OP_SEP=~
:cl_update_statd(0)[+182] set -u
:cl_update_statd(0)[+187] LOCAL_FOUND=true
:cl_update_statd(0)[+189] TWIN_NAME=epprds
:cl_update_statd(0)[+194] : Make sure statd is running locally
:cl_update_statd(0)[+196] lssrc -s statd
:cl_update_statd(0)[+196] LC_ALL=C
:cl_update_statd(0)[+196] grep -qw inoperative
:cl_update_statd(0)[+196] rpcinfo -p
:cl_update_statd(0)[+196] LC_ALL=C
:cl_update_statd(0)[+196] grep -qw status
:cl_update_statd(0)[+207] : Get the current twin, if there is one
:cl_update_statd(0)[+209] :cl_update_statd(0)[+209] nfso -H sm_gethost
:cl_update_statd(0)[+209] 2>& 1
CURTWIN=epprds
:cl_update_statd(0)[+210] RC=0
:cl_update_statd(0)[+212] [[ -z true ]]
:cl_update_statd(0)[+212] [[ -z epprds ]]
:cl_update_statd(0)[+225] : Get the interface to the twin node
:cl_update_statd(0)[+227] :cl_update_statd(0)[+227] get_node_ip epprds
:cl_update_statd(0)[+9] (( 1 != 1 ))
:cl_update_statd(0)[+15] Twin_Name=epprds
:cl_update_statd(0)[+16] NewTwin=
:cl_update_statd(0)[+19] : Get the Interface details for every interface on the twin node
:cl_update_statd(0)[+20] : Reject interfaces on nodes that are not public boot addresses
:cl_update_statd(0)[+21] : because those are the only ones we have state information for
:cl_update_statd(0)[+23] :cl_update_statd(0)[+23] cllsif -J ~ -Sw -i epprda
:cl_update_statd(0)[+23] LC_ALL=C
LOCAL_NETWORK_INFO=epprda~boot~net_ether_01~ether~public~epprda~61.81.244.134~~en0~~255.255.255.0~~~24~AF_INET
epprd~service~net_ether_01~ether~public~epprda~61.81.244.156~~~~255.255.255.0~~ignore~24~AF_INET
:cl_update_statd(0)[+24] cllsif -J ~ -Sw -i epprds
:cl_update_statd(0)[+24] LC_ALL=C
:cl_update_statd(0)[+25] read adapt type network net_type attrib node ip_addr skip interface skip netmask skip skip prefix ip_family
:cl_update_statd(0)[+25] IFS=~
:cl_update_statd(0)[+25] [[ public != public ]]
:cl_update_statd(0)[+25] [[ boot != boot ]]
:cl_update_statd(0)[+33] : Find the state of this candidate
:cl_update_statd(0)[+33] [[ AF_INET == AF_INET ]]
:cl_update_statd(0)[+37] :cl_update_statd(0)[+37] print 61.81.244.123
:cl_update_statd(0)[+37] tr ./ xx
addr=i61x81x244x123_epprds
:cl_update_statd(0)[+43] eval candidate_state=${i61x81x244x123_epprds:-down}
:cl_update_statd(0)[+43] candidate_state=UP
:cl_update_statd(0)[+46] : If state is UP, check to see if this node can talk to it
:cl_update_statd(0)[+46] [[ UP == UP ]]
:cl_update_statd(0)[+50] ping -w 5 -c 1 -q 61.81.244.123
:cl_update_statd(0)[+50] 1> /dev/null
:cl_update_statd(0)[+61] echo epprda~boot~net_ether_01~ether~public~epprda~61.81.244.134~~en0~~255.255.255.0~~~24~AF_INET epprd~service~net_ether_01~ether~public~epprda~61.81.244.156~~~~255.255.255.0~~ignore~24~AF_INET
:cl_update_statd(0)[+61] tr \n
:cl_update_statd(0)[+62] read lcl_adapt lcl_type lcl_network lcl_net_type lcl_attrib lcl_node lcl_ip_addr skip lcl_interface skip lcl_netmask skip skip lcl_prefix lcl_ip_family
:cl_update_statd(0)[+62] IFS=~
:cl_update_statd(0)[+62] [[ net_ether_01 != net_ether_01 ]]
:cl_update_statd(0)[+62] [[ boot != boot ]]
:cl_update_statd(0)[+62] [[ public != public ]]
:cl_update_statd(0)[+62] [[ AF_INET != AF_INET ]]
:cl_update_statd(0)[+62] [[ AF_INET == AF_INET ]]
:cl_update_statd(0)[+71] :cl_update_statd(0)[+71] print 61.81.244.134
:cl_update_statd(0)[+71] tr ./ xx
addr=i61x81x244x134_epprda
:cl_update_statd(0)[+77] eval lcl_candidate_state=${i61x81x244x134_epprda:-down}
:cl_update_statd(0)[+77] lcl_candidate_state=UP
:cl_update_statd(0)[+77] [[ UP == UP ]]
:cl_update_statd(0)[+81] : epprds is on the same network as an interface that is up
:cl_update_statd(0)[+82] : on the local node, and the attributes match.
:cl_update_statd(0)[+84] NewTwin=epprds
:cl_update_statd(0)[+85] break
:cl_update_statd(0)[+85] [[ -n epprds ]]
:cl_update_statd(0)[+91] break
:cl_update_statd(0)[+91] [[ -z epprds ]]
:cl_update_statd(0)[+100] echo epprds
:cl_update_statd(0)[+101] return 0
NEWTWIN=epprds
:cl_update_statd(0)[+227] [[ -z epprds ]]
:cl_update_statd(0)[+227] [[ epprds != epprds ]]
:cl_update_statd(0)[+259] : RC is actually 0
:cl_update_statd(0)[+266] return 0
:node_up_complete[147] (( 0 ))
:node_up_complete[151] break
:node_up_complete[156] (( 1 ))
:node_up_complete[158] (( 0 ))
:node_up_complete[198] [[ TRUE == FALSE ]]
:node_up_complete[268] refresh -s clcomd
0513-095 The request for subsystem refresh was completed successfully.
:node_up_complete[270] : This is the final clRGinfo output
:node_up_complete[272] clRGinfo -p -t
:node_up_complete[272] 2>& 1
clRGinfo[431]: version I
clRGinfo[517]: Number of resource groups = 0
clRGinfo[562]: cluster epprda_cluster is version = 22
clRGinfo[597]: no resource groups specified on command line - print all
clRGinfo[685]: Current group is 'epprd_rg'
get primary state info for state 6
get secondary state info for state 6
getPrimaryStateStr: using primary_table => primary_state_table
get primary state info for state 4
get secondary state info for state 4
getPrimaryStateStr: using primary_table => primary_state_table
Cluster Name: epprda_cluster
Resource Group Name: epprd_rg
Node                                                             Group State     Delayed Timers
---------------------------------------------------------------- --------------- -------------------
epprda ONLINE
epprds OFFLINE
:node_up_complete[277] (( 0 == 0 ))
:node_up_complete[279] [[ epprds != epprda ]]
:node_up_complete[281] grep -w In_progress_file /var/hacmp/cl_dr.state
:node_up_complete[281] 2> /dev/null
:node_up_complete[281] cut -d= -f2
:node_up_complete[281] lpm_in_progress_file=''
:node_up_complete[282] ls '/var/hacmp/.lpm_in_progress/lpm_*'
:node_up_complete[282] 2> /dev/null
:node_up_complete[282] lpm_in_progress_prefix=''
:node_up_complete[283] [[ -n '' ]]
:node_up_complete[300] exit 0
Jan 28 2023 17:11:49 EVENT COMPLETED: node_up_complete epprds 0
|2023-01-28T17:11:49|22167|EVENT COMPLETED: node_up_complete epprds 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:11:49.707442
+ echo '|2023-01-28T17:11:49.707442|INFO: node_up_complete|epprds|0'
+ 1>> /var/hacmp/availability/clavailability.log
Jan 28 2023 17:59:58 EVENT START: admin_op clrm_stop_request 22168 0
|2023-01-28T17:59:58|22168|EVENT START: admin_op clrm_stop_request 22168 0|
:admin_op[110] trap sigint_handler INT
:admin_op[116] OP_TYPE=clrm_stop_request
:admin_op[116] typeset OP_TYPE
:admin_op[117] SERIAL=22168
:admin_op[117] typeset -li SERIAL
:admin_op[118] INVALID=0
:admin_op[118] typeset -li INVALID
The administrator initiated the following action at Sat Jan 28 17:59:58 KORST 2023
Check smit.log and clutils.log for additional details.
Stopping PowerHA cluster services on node: epprda in graceful mode...
Jan 28 2023 17:59:58 EVENT COMPLETED: admin_op clrm_stop_request 22168 0 0
|2023-01-28T17:59:58|22168|EVENT COMPLETED: admin_op clrm_stop_request 22168 0 0|
PowerHA SystemMirror Event Preamble
----------------------------------------------------------------------------
Serial number for this event: 22168
Stop cluster services request with 'Graceful' option received for 'epprda'.
Enqueued rg_move release event for resource group epprd_rg.
Node Down Completion Event has been enqueued.
----------------------------------------------------------------------------
|EVENT_PREAMBLE_START|TE_FAIL_NODE_DEP|2023-01-28T17:59:58|22168|
|STOP_CLUSTER_SERVICES|Graceful|epprda|
|CLUSTER_RG_MOVE_RELEASE|epprd_rg|
|NODE_DOWN_COMPLETE|
|EVENT_PREAMBLE_END|
Jan 28 2023 17:59:59 EVENT START: node_down epprda graceful
|2023-01-28T17:59:59|22168|EVENT START: node_down epprda graceful|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:59:59.601396
+ echo '|2023-01-28T17:59:59.601396|INFO: node_down|epprda|graceful'
+ 1>> /var/hacmp/availability/clavailability.log
:node_down[64] version=%I%
:node_down[67] NODENAME=epprda
:node_down[67] export NODENAME
:node_down[68] PARAM=graceful
:node_down[68] export PARAM
:node_down[75] STATUS=0
:node_down[75] typeset -li STATUS
:node_down[77] AIX_SHUTDOWN=false
:node_down[79] set -u
:node_down[81] (( 2 < 1 ))
:node_down[87] : serial number for this event is 22168
:node_down[91] : Clean up NFS state tracking
:node_down[93] UPDATESTATDFILE=/usr/es/sbin/cluster/etc/updatestatd
:node_down[94] rm -f /tmp/.RPCLOCKDSTOPPED
:node_down[95] rm -f /usr/es/sbin/cluster/etc/updatestatd
:node_down[96] UPDATESTATD=0
:node_down[97] export UPDATESTATD
:node_down[100] : For RAS debugging, the result of ps -edf is captured at this time
:node_down[102] : begin ps -edf
:node_down[103] ps -edf
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 Nov 16 - 0:01 /etc/init
root 4260170 6095340 0 Nov 16 - 0:00 /usr/sbin/syslogd
root 5046714 1 0 Nov 16 - 0:00 /usr/ccs/bin/shlap64
root 5177846 1 0 Nov 16 - 17:12 /usr/sbin/syncd 60
root 5898680 1 0 Nov 16 - 0:00 /usr/dt/bin/dtlogin -daemon
root 5964246 1 0 Nov 16 - 0:00 /usr/lib/errdemon
root 6029768 6095340 0 16:38:31 - 0:00 /usr/sbin/snmpd
root 6095340 1 0 Nov 16 - 0:00 /usr/sbin/srcmstr
root 6226176 6095340 0 Nov 16 - 0:00 /usr/sbin/inetd
root 6357492 6095340 0 Nov 16 - 0:00 /usr/sbin/portmap
root 6488536 6095340 0 Nov 16 - 0:56 /usr/sbin/xntpd -x
root 6816230 6095340 0 Nov 16 - 0:04 /usr/sbin/hostmibd
root 6881760 6095340 0 Nov 16 - 0:23 sendmail: accepting connections
root 6947294 6095340 0 Nov 16 - 0:04 /usr/sbin/snmpmibd
root 7143710 6095340 0 Nov 16 - 0:22 /usr/sbin/aixmibd
root 7668214 1 0 Nov 16 - 0:11 /usr/sbin/cron
root 7799282 6095340 0 Nov 16 - 1:12 /usr/sbin/aso
daemon 7864678 6095340 0 17:10:55 - 0:00 /usr/sbin/rpc.statd -d 0 -t 50
root 7930136 6095340 0 Nov 16 - 0:00 /usr/sbin/qdaemon
root 8061186 6095340 0 Nov 16 - 0:00 /usr/sbin/biod 6
root 8126748 1 0 Nov 16 - 0:00 /usr/sbin/uprintfd
root 8257896 6095340 0 17:10:51 - 0:00 /usr/sbin/rpc.mountd
root 8520102 6095340 0 Nov 16 - 0:00 /usr/sbin/writesrv
root 8585542 6095340 0 Nov 16 - 0:00 /usr/sbin/sshd
root 8913186 6095340 0 Nov 16 - 0:00 /usr/sbin/pfcdaemon
root 13959478 6095340 0 17:00:52 - 0:00 /opt/rsct/bin/rmcd -a IBM.LPCommands -r -S 1500
root 14025136 6095340 0 Nov 16 - 0:00 /usr/sbin/lldpd
root 14090674 6095340 0 Nov 16 - 0:00 /usr/sbin/ecpvdpd
root 14287294 1 0 Nov 16 - 1:26 /usr/bin/topasrec -L -s 300 -R 1 -r 6 -o /var/perf/daily/ -ypersistent=1 -O type=bin -ystart_time=15:11:38,Nov16,2022
root 14352890 6095340 0 Nov 16 - 0:04 /opt/rsct/bin/IBM.MgmtDomainRMd
root 14614984 1 0 15:01:09 - 0:00 /usr/sbin/getty /dev/console
root 14877148 6095340 0 Nov 16 - 0:00 /var/perf/pm/bin/pmperfrec
root 15008234 6095340 0 Nov 16 - 0:00 /opt/rsct/bin/IBM.HostRMd
root 15073556 6095340 0 Nov 16 - 0:00 /opt/rsct/bin/IBM.ServiceRMd
root 15532528 6095340 0 Nov 16 - 0:00 /opt/rsct/bin/IBM.DRMd
root 18088406 8585542 0 16:32:54 - 0:00 sshd: root@pts/5
root 18612628 8585542 0 17:57:26 - 0:00 sshd: root@pts/6
root 18743778 8585542 0 15:03:01 - 0:00 sshd: root@pts/2
root 20251054 22020420 0 16:48:17 pts/4 0:00 -ksh
root 20447554 8585542 0 16:41:04 - 0:00 sshd: root@pts/7
root 20513024 20447554 0 16:41:07 pts/7 0:00 -ksh
root 20709790 18088406 0 16:32:54 pts/5 0:00 -ksh
root 20972018 6095340 0 17:07:08 - 0:00 /opt/rsct/bin/IBM.ConfigRMd
root 21561614 26411472 0 17:54:30 pts/3 0:00 smitty mknfsexp
root 21823786 18612628 0 17:57:27 pts/6 0:00 -ksh
root 22020420 8585542 0 16:48:14 - 0:00 sshd: root@pts/4
root 22086068 6095340 0 16:40:09 - 0:00 /usr/es/sbin/cluster/clstrmgr
root 22217052 18743778 0 15:03:01 pts/2 0:00 -ksh
root 22610296 6095340 0 17:09:04 - 0:00 /opt/rsct/bin/IBM.StorageRMd
root 22872426 6095340 0 17:09:38 - 0:00 /usr/sbin/clcomd -d -g
root 23003588 1 0 00:00:00 - 0:00 /usr/bin/topas_nmon -f -d -t -s 300 -c 288 -youtput_dir=/ptf/nmon/epprda -ystart_time=00:00:00,Jan28,2023
root 23462322 22086068 0 17:10:27 - 0:00 run_rcovcmd
root 25166118 25297194 0 17:40:42 pts/1 0:00 -ksh
root 25297194 8585542 0 17:40:41 - 0:00 sshd: root@pts/1
root 25362756 23462322 4 17:59:59 - 0:00 /usr/es/sbin/cluster/events/cmd/clcallev node_down epprda graceful
root 26018206 28049702 0 17:59:59 - 0:00 ps -edf
root 26214734 6095340 0 17:11:45 - 0:00 /usr/sbin/rpc.lockd -d 0
root 26411472 27853206 0 17:50:59 pts/3 0:00 -ksh
root 26804518 1 0 0:00
root 26935622 28246396 0 17:10:28 - 0:00 /usr/sbin/gsclvmd -r 30 -i 300 -t 300 -c 00c44af100004b00000001851e9dc053 -v 0
root 27394550 28311894 0 17:32:07 pts/0 0:00 -ksh
root 27853206 8585542 0 17:50:59 - 0:00 sshd: root@pts/3
root 28049702 25362756 0 17:59:59 - 0:00 /bin/ksh93 /usr/es/sbin/cluster/events/node_down epprda graceful
root 28180804 13959478 0 17:00:52 - 0:00 [trspoolm]
root 28246396 6095340 0 17:10:21 - 0:00 /usr/sbin/gsclvmd
root 28311894 8585542 0 17:32:06 - 0:00 sshd: root@pts/0
root 28377402 6095340 0 17:10:46 - 0:00 /usr/sbin/nfsd 3891
root 28770708 6095340 0 17:08:42 - 0:00 /usr/sbin/clconfd
root 28901860 20513024 1 17:59:53 pts/7 0:00 smitty clstop
root 29163932 6095340 0 17:08:43 - 0:00 /usr/sbin/rsct/bin/hagsd cthags
:node_down[104] : end ps -edf
:node_down[107] : If RG_DEPENDENCIES is not false, all RG actions are taken via rg_move events.
:node_down[109] [[ graceful != forced ]]
:node_down[109] [[ TRUE == FALSE ]]
:node_down[207] : Processing specific to the local node
:node_down[209] [[ epprda == epprda ]]
:node_down[212] : Stopping cluster services on epprda with the graceful option
:node_down[214] [[ graceful != forced ]]
:node_down[219] lsvg -L
:node_down[219] lsvg -L -o
:node_down[219] paste -s '-d|' -
:node_down[219] grep -w -v -x -E 'datavg|caavg_private|rootvg'
:node_down[219] INACTIVE_VGS=''
:node_down[222] [[ -n '' ]]
:node_down[272] unset PS4_LOOP
:node_down[276] : update the location DB to indicate this node is going down
:node_down[278] clchdaemons -r -d clstrmgr_scripts -t resource_locator
:node_down[296] [[ -n false ]]
:node_down[296] [[ false == true ]]
:node_down[305] exit 0
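# Annotation (editor, not part of the trace): on a graceful local stop,
# node_down looks for volume groups that are known but not varied on, by
# removing every active VG (lsvg -L -o, joined into a grep alternation with
# paste) from the full list (lsvg -L); here the result is empty, so no
# cleanup is needed. A minimal sketch of that check:
INACTIVE_VGS=$(lsvg -L | grep -w -v -x -E "$(lsvg -L -o | paste -s '-d|' -)")
if [[ -n $INACTIVE_VGS ]]; then
    : varyoff/cleanup for the inactive volume groups would happen here
fi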
Jan 28 2023 17:59:59 EVENT COMPLETED: node_down epprda graceful 0
|2023-01-28T17:59:59|22168|EVENT COMPLETED: node_down epprda graceful 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T17:59:59.748499
+ echo '|2023-01-28T17:59:59.748499|INFO: node_down|epprda|graceful|0'
+ 1>> /var/hacmp/availability/clavailability.log
PowerHA SystemMirror Event Preamble
----------------------------------------------------------------------------
Serial number for this event: 22171
Stop cluster services request with 'Graceful' option received for 'epprds'.
Enqueued rg_move release event for resource group epprd_rg.
Node Down Completion Event has been enqueued.
----------------------------------------------------------------------------
|EVENT_PREAMBLE_START|TE_FAIL_NODE_DEP|2023-01-28T18:00:02|22171|
|STOP_CLUSTER_SERVICES|Graceful|epprds|
|CLUSTER_RG_MOVE_RELEASE|epprd_rg|
|NODE_DOWN_COMPLETE|
|EVENT_PREAMBLE_END|
Jan 28 2023 18:00:03 EVENT START: node_down epprds graceful
|2023-01-28T18:00:03|22171|EVENT START: node_down epprds graceful|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T18:00:03.454439
+ echo '|2023-01-28T18:00:03.454439|INFO: node_down|epprds|graceful'
+ 1>> /var/hacmp/availability/clavailability.log
:node_down[64] version=%I%
:node_down[67] NODENAME=epprds
:node_down[67] export NODENAME
:node_down[68] PARAM=graceful
:node_down[68] export PARAM
:node_down[75] STATUS=0
:node_down[75] typeset -li STATUS
:node_down[77] AIX_SHUTDOWN=false
:node_down[79] set -u
:node_down[81] (( 2 < 1 ))
:node_down[87] : serial number for this event is 22171
:node_down[91] : Clean up NFS state tracking
:node_down[93] UPDATESTATDFILE=/usr/es/sbin/cluster/etc/updatestatd
:node_down[94] rm -f /tmp/.RPCLOCKDSTOPPED
:node_down[95] rm -f /usr/es/sbin/cluster/etc/updatestatd
:node_down[96] UPDATESTATD=0
:node_down[97] export UPDATESTATD
:node_down[100] : For RAS debugging, the result of ps -edf is captured at this time
:node_down[102] : begin ps -edf
:node_down[103] ps -edf
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 Nov 16 - 0:01 /etc/init
root 4260170 6095340 0 Nov 16 - 0:00 /usr/sbin/syslogd
root 5046714 1 0 Nov 16 - 0:00 /usr/ccs/bin/shlap64
root 5177846 1 0 Nov 16 - 17:12 /usr/sbin/syncd 60
root 5898680 1 0 Nov 16 - 0:00 /usr/dt/bin/dtlogin -daemon
root 5964246 1 0 Nov 16 - 0:00 /usr/lib/errdemon
root 6029768 6095340 0 16:38:31 - 0:00 /usr/sbin/snmpd
root 6095340 1 0 Nov 16 - 0:00 /usr/sbin/srcmstr
root 6226176 6095340 0 Nov 16 - 0:00 /usr/sbin/inetd
root 6357492 6095340 0 Nov 16 - 0:00 /usr/sbin/portmap
root 6488536 6095340 0 Nov 16 - 0:56 /usr/sbin/xntpd -x
root 6816230 6095340 0 Nov 16 - 0:04 /usr/sbin/hostmibd
root 6881760 6095340 0 Nov 16 - 0:23 sendmail: accepting connections
root 6947294 6095340 0 Nov 16 - 0:04 /usr/sbin/snmpmibd
root 7143710 6095340 0 Nov 16 - 0:22 /usr/sbin/aixmibd
root 7668214 1 0 Nov 16 - 0:11 /usr/sbin/cron
root 7799282 6095340 0 Nov 16 - 1:12 /usr/sbin/aso
daemon 7864678 6095340 0 17:10:55 - 0:00 /usr/sbin/rpc.statd -d 0 -t 50
root 7930136 6095340 0 Nov 16 - 0:00 /usr/sbin/qdaemon
root 8061186 6095340 0 Nov 16 - 0:00 /usr/sbin/biod 6
root 8126748 1 0 Nov 16 - 0:00 /usr/sbin/uprintfd
root 8257896 6095340 0 17:10:51 - 0:00 /usr/sbin/rpc.mountd
root 8520102 6095340 0 Nov 16 - 0:00 /usr/sbin/writesrv
root 8585542 6095340 0 Nov 16 - 0:00 /usr/sbin/sshd
root 8913186 6095340 0 Nov 16 - 0:00 /usr/sbin/pfcdaemon
root 13959478 6095340 0 17:00:52 - 0:00 /opt/rsct/bin/rmcd -a IBM.LPCommands -r -S 1500
root 14025136 6095340 0 Nov 16 - 0:00 /usr/sbin/lldpd
root 14090674 6095340 0 Nov 16 - 0:00 /usr/sbin/ecpvdpd
root 14287294 1 0 Nov 16 - 1:26 /usr/bin/topasrec -L -s 300 -R 1 -r 6 -o /var/perf/daily/ -ypersistent=1 -O type=bin -ystart_time=15:11:38,Nov16,2022
root 14352890 6095340 0 Nov 16 - 0:04 /opt/rsct/bin/IBM.MgmtDomainRMd
root 14614984 1 0 15:01:09 - 0:00 /usr/sbin/getty /dev/console
root 14877148 6095340 0 Nov 16 - 0:00 /var/perf/pm/bin/pmperfrec
root 15008234 6095340 0 Nov 16 - 0:00 /opt/rsct/bin/IBM.HostRMd
root 15073556 6095340 0 Nov 16 - 0:00 /opt/rsct/bin/IBM.ServiceRMd
root 15532528 6095340 0 Nov 16 - 0:00 /opt/rsct/bin/IBM.DRMd
root 18088406 8585542 0 16:32:54 - 0:00 sshd: root@pts/5
root 18612628 8585542 0 17:57:26 - 0:00 sshd: root@pts/6
root 18743778 8585542 0 15:03:01 - 0:00 sshd: root@pts/2
root 20251054 22020420 0 16:48:17 pts/4 0:00 -ksh
root 20447554 8585542 0 16:41:04 - 0:00 sshd: root@pts/7
root 20513024 20447554 0 16:41:07 pts/7 0:00 -ksh
root 20709790 18088406 0 16:32:54 pts/5 0:00 -ksh
root 20972018 6095340 0 17:07:08 - 0:00 /opt/rsct/bin/IBM.ConfigRMd
root 21561614 26411472 0 17:54:30 pts/3 0:00 smitty mknfsexp
root 21823786 18612628 0 17:57:27 pts/6 0:00 -ksh
root 22020420 8585542 0 16:48:14 - 0:00 sshd: root@pts/4
root 22086068 6095340 0 16:40:09 - 0:00 /usr/es/sbin/cluster/clstrmgr
root 22217052 18743778 0 15:03:01 pts/2 0:00 -ksh
root 22610296 6095340 0 17:09:04 - 0:00 /opt/rsct/bin/IBM.StorageRMd
root 22872426 6095340 0 17:09:38 - 0:00 /usr/sbin/clcomd -d -g
root 23003588 1 0 00:00:00 - 0:00 /usr/bin/topas_nmon -f -d -t -s 300 -c 288 -youtput_dir=/ptf/nmon/epprda -ystart_time=00:00:00,Jan28,2023
root 23462322 22086068 0 17:10:27 - 0:00 run_rcovcmd
root 25166118 25297194 0 17:40:42 pts/1 0:00 -ksh
root 25297194 8585542 0 17:40:41 - 0:00 sshd: root@pts/1
root 26214734 6095340 0 17:11:45 - 0:00 /usr/sbin/rpc.lockd -d 0
root 26411472 27853206 0 17:50:59 pts/3 0:00 -ksh
root 26804570 28049712 0 18:00:03 - 0:00 ps -edf
root 26935622 28246396 0 17:10:28 - 0:00 /usr/sbin/gsclvmd -r 30 -i 300 -t 300 -c 00c44af100004b00000001851e9dc053 -v 0
root 27394550 28311894 0 17:32:07 pts/0 0:00 -ksh
root 27853206 8585542 0 17:50:59 - 0:00 sshd: root@pts/3
root 28049712 28901864 0 18:00:03 - 0:00 /bin/ksh93 /usr/es/sbin/cluster/events/node_down epprds graceful
root 28180804 13959478 0 17:00:52 - 0:00 [trspoolm]
root 28246396 6095340 0 17:10:21 - 0:00 /usr/sbin/gsclvmd
root 28311894 8585542 0 17:32:06 - 0:00 sshd: root@pts/0
root 28377402 6095340 0 17:10:46 - 0:00 /usr/sbin/nfsd 3891
root 28770708 6095340 0 17:08:42 - 0:00 /usr/sbin/clconfd
root 28901864 23462322 3 18:00:03 - 0:00 /usr/es/sbin/cluster/events/cmd/clcallev node_down epprds graceful
root 29163932 6095340 0 17:08:43 - 0:00 /usr/sbin/rsct/bin/hagsd cthags
:node_down[104] : end ps -edf
:node_down[107] : If RG_DEPENDENCIES is not false, all RG actions are taken via rg_move events.
:node_down[109] [[ graceful != forced ]]
:node_down[109] [[ TRUE == FALSE ]]
:node_down[207] : Processing specific to the local node
:node_down[209] [[ epprds == epprda ]]
:node_down[284] : epprds, is not the local node, handle fencing for any VGs marked as $'\'CRITICAL\'.'
:node_down[286] cl_fence_vg epprds
:cl_fence_vg[336] version=%I%
:cl_fence_vg[341] : Collect list of disks, for use later
:cl_fence_vg[343] lspv
:cl_fence_vg[343] lspv_out=$'hdisk0 00c44af155592938 rootvg active \nhdisk1 00c44af11e9e1645 caavg_private active \nhdisk2 00c44af11e8a9c69 datavg concurrent \nhdisk3 00c44af11e8a9cd7 datavg concurrent \nhdisk4 00c44af11e8a9d3c datavg concurrent \nhdisk5 00c44af11e8a9c05 datavg concurrent \nhdisk6 00c44af11e8a9e05 datavg concurrent \nhdisk7 00c44af11e8a9d9f datavg concurrent \nhdisk8 00c44af11e8a9e69 datavg concurrent '
:cl_fence_vg[345] [[ -z epprda ]]
:cl_fence_vg[354] : Accept a formal parameter of 'name of node that failed' if none were set
:cl_fence_vg[355] : in the environment
:cl_fence_vg[357] EVENTNODE=epprds
:cl_fence_vg[359] [[ -z epprds ]]
:cl_fence_vg[368] : An explicit volume group list can be passed after the name of
:cl_fence_vg[369] : the node that failed. Pick up any such
:cl_fence_vg[371] shift
:cl_fence_vg[372] vg_list=''
:cl_fence_vg[374] common_groups=''
:cl_fence_vg[375] common_critical_vgs=''
:cl_fence_vg[377] [[ -z '' ]]
:cl_fence_vg[380] : Find all the concurrent resource groups that contain both epprds and epprda
:cl_fence_vg[382] clodmget -q 'startup_pref = OAAN' -f group -n HACMPgroup
:cl_fence_vg[424] : Look at each of the resource groups in turn to determine what CRITICAL
:cl_fence_vg[425] : volume groups the local node epprda share access with epprds
:cl_fence_vg[443] : Process the list of common volume groups,
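
Condensed, the discovery pass of cl_fence_vg traced above amounts to the following sketch (the clodmget query is verbatim from the trace; the loop body is paraphrased, and here the query returns nothing, so no fencing is performed):

    # concurrent RGs have startup preference OAAN ("online on all available nodes")
    common_groups=$(clodmget -q 'startup_pref = OAAN' -f group -n HACMPgroup)
    for group in $common_groups
    do
        :   # keep groups whose node list contains both EVENTNODE (epprds) and
        :   # the local node (epprda); collect their CRITICAL VGs for fencing
    done
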
:node_down[296] [[ -n false ]]
:node_down[296] [[ false == true ]]
:node_down[305] exit 0
Jan 28 2023 18:00:03 EVENT COMPLETED: node_down epprds graceful 0
|2023-01-28T18:00:03|22171|EVENT COMPLETED: node_down epprds graceful 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T18:00:03.600203
+ echo '|2023-01-28T18:00:03.600203|INFO: node_down|epprds|graceful|0'
+ 1>> /var/hacmp/availability/clavailability.log
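
The four `+`-prefixed lines above are the availability-logging idiom that recurs after every event in this log. Reduced to a sketch (clcycle rotates the log and cltime prints the timestamp, as the trace shows):

    clcycle clavailability.log > /dev/null 2>&1     # rotate if size limit reached
    DATE=$(cltime)                                  # e.g. 2023-01-28T18:00:03.600203
    echo "|$DATE|INFO: node_down|epprds|graceful|0" \
        >> /var/hacmp/availability/clavailability.log
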
Jan 28 2023 18:00:05 EVENT START: rg_move_release epprda 1
|2023-01-28T18:00:05|22169|EVENT START: rg_move_release epprda 1|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T18:00:05.820719
+ echo '|2023-01-28T18:00:05.820719|INFO: rg_move_release|epprd_rg|epprda|1'
+ 1>> /var/hacmp/availability/clavailability.log
:rg_move_release[+54] [[ high = high ]]
:rg_move_release[+54] version=1.6
:rg_move_release[+56] set -u
:rg_move_release[+58] [ 2 != 2 ]
:rg_move_release[+64] set +u
:rg_move_release[+66] clcallev rg_move epprda 1 RELEASE
Jan 28 2023 18:00:05 EVENT START: rg_move epprda 1 RELEASE
|2023-01-28T18:00:05|22169|EVENT START: rg_move epprda 1 RELEASE|
:clevlog[amlog_trace:318] clcycle clavailability.log
:clevlog[amlog_trace:318] 1> /dev/null 2>& 1
:clevlog[amlog_trace:319] cltime
:clevlog[amlog_trace:319] DATE=2023-01-28T18:00:05.946742
:clevlog[amlog_trace:320] echo '|2023-01-28T18:00:05.946742|INFO: rg_move|epprd_rg|epprda|1|RELEASE'
:clevlog[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
:get_local_nodename[48] version=1.2.1.28
:get_local_nodename[52] : cllsclstr -N will return the local node if not configured in HACMPcluster
:get_local_nodename[54] ODMDIR=/etc/es/objrepos
:get_local_nodename[54] export ODMDIR
:get_local_nodename[55] nodename=''
:get_local_nodename[55] typeset nodename
:get_local_nodename[56] cllsclstr -N
:get_local_nodename[56] nodename=epprda
:get_local_nodename[57] rc=0
:get_local_nodename[57] typeset -i rc
:get_local_nodename[58] (( 0 != 0 ))
:get_local_nodename[61] : If the node name in HACMPcluster matches a configured node, we are done.
:get_local_nodename[63] clnodename
:get_local_nodename[63] grep -w epprda
:get_local_nodename[63] [[ -n epprda ]]
:get_local_nodename[65] print -- epprda
:get_local_nodename[66] exit 0
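
A condensed sketch of get_local_nodename as traced above (every command appears in the trace; error handling abbreviated):

    ODMDIR=/etc/es/objrepos; export ODMDIR
    nodename=$(cllsclstr -N)                 # node name from HACMPcluster
    rc=$?
    (( rc != 0 )) && exit $rc
    if clnodename | grep -qw "$nodename"     # confirm it is a configured node
    then
        print -- "$nodename"
        exit 0
    fi
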
:rg_move[76] version=%I%
:rg_move[86] STATUS=0
:rg_move[88] [[ ! -n '' ]]
:rg_move[90] EMULATE=REAL
:rg_move[96] set -u
:rg_move[98] NODENAME=epprda
:rg_move[98] export NODENAME
:rg_move[99] RGID=1
:rg_move[100] (( 3 == 3 ))
:rg_move[102] ACTION=RELEASE
:rg_move[108] : serial number for this event is 22169
:rg_move[112] RG_UP_POSTEVENT_ON_NODE=epprda
:rg_move[112] export RG_UP_POSTEVENT_ON_NODE
:rg_move[116] clodmget -qid=1 -f group -n HACMPgroup
:rg_move[116] eval RGNAME=epprd_rg
:rg_move[1] RGNAME=epprd_rg
:rg_move[118] UPDATESTATD=0
:rg_move[119] export UPDATESTATD
:rg_move[123] RG_MOVE_EVENT=true
:rg_move[123] export RG_MOVE_EVENT
:rg_move[128] group_state='$RESGRP_epprd_rg_epprda'
:rg_move[129] set +u
:rg_move[130] eval print '$RESGRP_epprd_rg_epprda'
:rg_move[1] print ONLINE
:rg_move[130] RG_MOVE_ONLINE=ONLINE
:rg_move[130] export RG_MOVE_ONLINE
:rg_move[131] set -u
:rg_move[132] RG_MOVE_ONLINE=ONLINE
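
The group_state/eval pair above is ksh indirect expansion: the per-group state lives in a variable whose name is composed from the group and node names. As a sketch, with names from the trace:

    RGNAME=epprd_rg
    NODENAME=epprda
    group_state="\$RESGRP_${RGNAME}_${NODENAME}"    # literally '$RESGRP_epprd_rg_epprda'
    RG_MOVE_ONLINE=$(eval print -- "$group_state")  # indirect read of that variable -> ONLINE
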
:rg_move[139] rm -f /tmp/.NFSSTOPPED
:rg_move[140] rm -f /tmp/.RPCLOCKDSTOPPED
:rg_move[147] set -a
:rg_move[148] clsetenvgrp epprda rg_move epprd_rg
:clsetenvgrp[+49] [[ high = high ]]
:clsetenvgrp[+49] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clsetenvgrp.sh 1$
:clsetenvgrp[+51] usingVer=clSetenvgrp
:clsetenvgrp[+56] clSetenvgrp epprda rg_move epprd_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+57] exit 0
:rg_move[148] clsetenvgrp_output=FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
:rg_move[149] RC=0
:rg_move[150] eval FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
:rg_move[1] FORCEDOWN_GROUPS=''
:rg_move[2] RESOURCE_GROUPS=''
:rg_move[3] HOMELESS_GROUPS=''
:rg_move[4] HOMELESS_FOLLOWER_GROUPS=''
:rg_move[5] ERRSTATE_GROUPS=''
:rg_move[6] PRINCIPAL_ACTIONS=''
:rg_move[7] ASSOCIATE_ACTIONS=''
:rg_move[8] AUXILLIARY_ACTIONS=''
:rg_move[8] SIBLING_GROUPS=''
:rg_move[9] SIBLING_NODES_BY_GROUP=''
:rg_move[10] SIBLING_ACQUIRING_GROUPS=''
:rg_move[11] SIBLING_ACQUIRING_NODES_BY_GROUP=''
:rg_move[12] SIBLING_RELEASING_GROUPS=''
:rg_move[13] SIBLING_RELEASING_NODES_BY_GROUP=''
:rg_move[151] set +a
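
clsetenvgrp prints a block of shell assignments as text; rg_move captures that text and evals it between set -a and set +a so every assigned variable is also exported. The pattern:

    set -a                                      # auto-export all subsequent assignments
    clsetenvgrp_output=$(clsetenvgrp epprda rg_move epprd_rg)
    eval "$clsetenvgrp_output"                  # FORCEDOWN_GROUPS='' RESOURCE_GROUPS='' ...
    set +a
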
:rg_move[155] (( 0 != 0 ))
:rg_move[155] [[ -z epprd_rg ]]
:rg_move[164] [[ -z TRUE ]]
:rg_move[241] AM_SYNC_CALLED_BY=RG_MOVE
:rg_move[241] export AM_SYNC_CALLED_BY
:rg_move[242] process_resources
:process_resources[3318] version=1.169
:process_resources[3321] STATUS=0
:process_resources[3322] sddsrv_off=FALSE
:process_resources[3324] true
:process_resources[3326] : call rgpa, and it will tell us what to do next
:process_resources[3328] set -a
:process_resources[3329] clRGPA
:clRGPA[+47] [[ high = high ]]
:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
:clRGPA[+49] usingVer=clrgpa
:clRGPA[+54] clrgpa
2023-01-28T18:00:06.069058 clrgpa
:clRGPA[+55] exit 0
:process_resources[3329] eval JOB_TYPE=RELEASE RESOURCE_GROUPS='"epprd_rg"' PRINCIPAL_ACTION='"RELEASE"' AUXILLIARY_ACTION='"NONE"'
:process_resources[1] JOB_TYPE=RELEASE
:process_resources[1] RESOURCE_GROUPS=epprd_rg
:process_resources[1] PRINCIPAL_ACTION=RELEASE
:process_resources[1] AUXILLIARY_ACTION=NONE
:process_resources[3330] RC=0
:process_resources[3331] set +a
:process_resources[3333] (( 0 != 0 ))
:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ RELEASE == RELEASE ]]
+epprd_rg:process_resources[3363] INFO_STRING=''
+epprd_rg:process_resources[3364] clnodename
+epprd_rg:process_resources[3373] ENV_VAR=GROUP_epprd_rg_epprda
+epprd_rg:process_resources[3374] eval 'echo $GROUP_epprd_rg_epprda'
+epprd_rg:process_resources[1] echo ISUPPREEVENT
+epprd_rg:process_resources[3374] read ENV_VAR
+epprd_rg:process_resources[3375] [[ ISUPPREEVENT == WILLBEUPPOSTEVENT ]]
+epprd_rg:process_resources[3379] [[ ISUPPREEVENT == ISUPPREEVENT ]]
+epprd_rg:process_resources[3380] INFO_STRING='|SOURCE=epprda'
+epprd_rg:process_resources[3381] IS_SERVICE_START=0
+epprd_rg:process_resources[3373] ENV_VAR=GROUP_epprd_rg_epprds
+epprd_rg:process_resources[3374] eval 'echo $GROUP_epprd_rg_epprds'
+epprd_rg:process_resources[1] echo
+epprd_rg:process_resources[3374] read ENV_VAR
+epprd_rg:process_resources[3375] [[ '' == WILLBEUPPOSTEVENT ]]
+epprd_rg:process_resources[3379] [[ '' == ISUPPREEVENT ]]
+epprd_rg:process_resources[3384] (( 0 == 0 && 1 == 0 ))
+epprd_rg:process_resources[3660] set_resource_group_state RELEASING
+epprd_rg:process_resources[set_resource_group_state:82] PS4_FUNC=set_resource_group_state
+epprd_rg:process_resources[set_resource_group_state:82] typeset PS4_FUNC
+epprd_rg:process_resources[set_resource_group_state:83] [[ high == high ]]
+epprd_rg:process_resources[set_resource_group_state:83] set -x
+epprd_rg:process_resources[set_resource_group_state:84] STAT=0
+epprd_rg:process_resources[set_resource_group_state:85] new_status=RELEASING
+epprd_rg:process_resources[set_resource_group_state:89] export GROUPNAME
+epprd_rg:process_resources[set_resource_group_state:90] [[ RELEASING != DOWN ]]
+epprd_rg:process_resources[set_resource_group_state:92] clchdaemons -d clstrmgr_scripts -t resource_locator -n epprda -o epprd_rg -v RELEASING
+epprd_rg:process_resources[set_resource_group_state:100] : Resource Manager Updates
+epprd_rg:process_resources[set_resource_group_state:111] amlog_trace '' 'acquire|epprd_rg|epprda'
+epprd_rg:process_resources[amlog_trace:318] clcycle clavailability.log
+epprd_rg:process_resources[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:process_resources[amlog_trace:319] cltime
+epprd_rg:process_resources[amlog_trace:319] DATE=2023-01-28T18:00:06.114547
+epprd_rg:process_resources[amlog_trace:320] echo '|2023-01-28T18:00:06.114547|INFO: acquire|epprd_rg|epprda'
+epprd_rg:process_resources[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:process_resources[set_resource_group_state:112] cl_RMupdate releasing epprd_rg process_resources
2023-01-28T18:00:06.139042
2023-01-28T18:00:06.143586
+epprd_rg:process_resources[set_resource_group_state:153] return 0
+epprd_rg:process_resources[3661] RC=0
+epprd_rg:process_resources[3662] (( 0 != 0 ))
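
That completes one turn of the process_resources dispatcher loop, which now repeats for each job type below (APPLICATIONS, MOUNT_FILESYSTEMS, EXPORT_FILESYSTEMS, FILESYSTEMS, ...). Its overall shape, reconstructed from this trace as a sketch rather than the verbatim script:

    while true
    do
        set -a
        eval $(clRGPA)      # clrgpa prints JOB_TYPE=... ACTION=... RESOURCE_GROUPS=...
        set +a
        case $JOB_TYPE in
            RELEASE)            set_resource_group_state RELEASING ;;
            APPLICATIONS)       process_applications $ACTION ;;
            MOUNT_FILESYSTEMS)  unmount_nfs_filesystems ;;
            EXPORT_FILESYSTEMS) unexport_filesystems ;;
            FILESYSTEMS)        process_file_systems $ACTION ;;
            NONE)               break ;;   # assumed terminator; not reached in this excerpt
        esac
    done
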
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T18:00:06.155780 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=APPLICATIONS ACTION=RELEASE ALL_APPLICATIONS='"epprd_app"' RESOURCE_GROUPS='"epprd_rg' '"' MISCDATA='""'
+epprd_rg:process_resources[1] JOB_TYPE=APPLICATIONS
+epprd_rg:process_resources[1] ACTION=RELEASE
+epprd_rg:process_resources[1] ALL_APPLICATIONS=epprd_app
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[1] MISCDATA=''
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ APPLICATIONS == RELEASE ]]
+epprd_rg:process_resources[3360] [[ APPLICATIONS == ONLINE ]]
+epprd_rg:process_resources[3549] process_applications RELEASE
+epprd_rg:process_resources[process_applications:312] PS4_FUNC=process_applications
+epprd_rg:process_resources[process_applications:312] typeset PS4_FUNC
+epprd_rg:process_resources[process_applications:313] [[ high == high ]]
+epprd_rg:process_resources[process_applications:313] set -x
+epprd_rg:process_resources[process_applications:316] : Each subprocess will log to a file with this name and PID
+epprd_rg:process_resources[process_applications:318] TMP_FILE=/var/hacmp/log/.process_resources_applications.21954878
+epprd_rg:process_resources[process_applications:318] export TMP_FILE
+epprd_rg:process_resources[process_applications:320] rm -f '/var/hacmp/log/.process_resources_applications*'
+epprd_rg:process_resources[process_applications:322] WAITPIDS=''
+epprd_rg:process_resources[process_applications:323] LPAR_ACQUIRE_FAILED=0
+epprd_rg:process_resources[process_applications:324] LPAR_RELEASE_FAILED=0
+epprd_rg:process_resources[process_applications:325] START_STOP_FAILED=0
+epprd_rg:process_resources[process_applications:326] LIST_OF_APPS=epprd_app
+epprd_rg:process_resources[process_applications:329] : Acquire lpar resources in one-shot before starting applications
+epprd_rg:process_resources[process_applications:331] [[ RELEASE == ACQUIRE ]]
+epprd_rg:process_resources[process_applications:343] (( LPAR_ACQUIRE_FAILED == 0 ))
+epprd_rg:process_resources[process_applications:345] : Loop through all groups to start or stop applications
+epprd_rg:process_resources[process_applications:348] export GROUPNAME
+epprd_rg:process_resources[process_applications:351] : Break out application data
+epprd_rg:process_resources[process_applications:353] get_list_head epprd_app
+epprd_rg:process_resources[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources[get_list_head:60] set -x
+epprd_rg:process_resources[get_list_head:61] echo epprd_app
+epprd_rg:process_resources[get_list_head:61] read listhead listtail
+epprd_rg:process_resources[get_list_head:61] IFS=:
+epprd_rg:process_resources[get_list_head:62] echo epprd_app
+epprd_rg:process_resources[get_list_head:62] tr , ' '
+epprd_rg:process_resources[process_applications:353] read LIST_OF_APPLICATIONS_FOR_RG
+epprd_rg:process_resources[process_applications:354] get_list_tail epprd_app
+epprd_rg:process_resources[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources[get_list_tail:68] set -x
+epprd_rg:process_resources[get_list_tail:69] echo epprd_app
+epprd_rg:process_resources[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources[get_list_tail:69] IFS=:
+epprd_rg:process_resources[get_list_tail:70] echo
+epprd_rg:process_resources[process_applications:354] read ALL_APPLICATIONS
+epprd_rg:process_resources[process_applications:356] get_list_head
+epprd_rg:process_resources[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources[get_list_head:60] set -x
+epprd_rg:process_resources[get_list_head:61] echo
+epprd_rg:process_resources[get_list_head:61] read listhead listtail
+epprd_rg:process_resources[get_list_head:61] IFS=:
+epprd_rg:process_resources[get_list_head:62] echo
+epprd_rg:process_resources[get_list_head:62] tr , ' '
+epprd_rg:process_resources[process_applications:356] read MISCDATA_FOR_RG
+epprd_rg:process_resources[process_applications:357] get_list_tail
+epprd_rg:process_resources[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources[get_list_tail:68] set -x
+epprd_rg:process_resources[get_list_tail:69] echo
+epprd_rg:process_resources[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources[get_list_tail:69] IFS=:
+epprd_rg:process_resources[get_list_tail:70] echo
+epprd_rg:process_resources[process_applications:357] read MISCDATA
+epprd_rg:process_resources[process_applications:359] [[ RELEASE == RELEASE ]]
+epprd_rg:process_resources[process_applications:363] TMPLIST=''
+epprd_rg:process_resources[process_applications:364] print epprd_app
+epprd_rg:process_resources[process_applications:364] set -A appnames epprd_app
+epprd_rg:process_resources[process_applications:366] (( cnt=0))
+epprd_rg:process_resources[process_applications:366] (( cnt < 1))
+epprd_rg:process_resources[process_applications:367] TMPLIST='epprd_app '
+epprd_rg:process_resources[process_applications:368] LIST_OF_APPLICATIONS_FOR_RG=epprd_app
+epprd_rg:process_resources[process_applications:366] ((cnt++ ))
+epprd_rg:process_resources[process_applications:366] (( cnt < 1))
+epprd_rg:process_resources[process_applications:371] LIST_OF_APPLICATIONS_FOR_RG='epprd_app '
+epprd_rg:process_resources[process_applications:374] APPLICATIONS='epprd_app '
+epprd_rg:process_resources[process_applications:374] export APPLICATIONS
+epprd_rg:process_resources[process_applications:375] MISC_DATA=''
+epprd_rg:process_resources[process_applications:375] export MISC_DATA
+epprd_rg:process_resources[process_applications:378] : Now call start_or_stop_applications_for_rg to do the app start/stop.
+epprd_rg:process_resources[process_applications:381] start_or_stop_applications_for_rg RELEASE /var/hacmp/log/.process_resources_applications.21954878.epprd_rg
+epprd_rg:process_resources[start_or_stop_applications_for_rg:248] PS4_FUNC=start_or_stop_applications_for_rg
+epprd_rg:process_resources[start_or_stop_applications_for_rg:248] typeset PS4_FUNC
+epprd_rg:process_resources[start_or_stop_applications_for_rg:249] [[ high == high ]]
+epprd_rg:process_resources[start_or_stop_applications_for_rg:249] set -x
+epprd_rg:process_resources[start_or_stop_applications_for_rg:251] [[ RELEASE == ACQUIRE ]]
+epprd_rg:process_resources[start_or_stop_applications_for_rg:255] cmd_to_execute=stop_server
+epprd_rg:process_resources[start_or_stop_applications_for_rg:259] : File name to store our exit status
+epprd_rg:process_resources[start_or_stop_applications_for_rg:261] STATUS_FILE=/var/hacmp/log/.process_resources_applications.21954878.epprd_rg
+epprd_rg:process_resources[start_or_stop_applications_for_rg:264] : Use clcallev to run the event
+epprd_rg:process_resources[start_or_stop_applications_for_rg:266] clcallev stop_server 'epprd_app '
+epprd_rg:process_resources[process_applications:384] : Add PID of the last bg start_or_stop_applications_for_rg process to WAITPIDS.
+epprd_rg:process_resources[process_applications:386] WAITPIDS=' 26018292'
+epprd_rg:process_resources[process_applications:390] : Wait for the start_or_stop_applications_for_rg PIDs to finish.
+epprd_rg:process_resources[process_applications:393] wait 26018292
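
process_applications runs each group's application start/stop as a background subprocess and collects the PIDs, so multiple resource groups could be handled in parallel (only epprd_rg here). A sketch of the fork-and-wait pattern from the trace:

    WAITPIDS=''
    for GROUPNAME in $RESOURCE_GROUPS
    do
        start_or_stop_applications_for_rg RELEASE $TMP_FILE.$GROUPNAME &
        WAITPIDS="$WAITPIDS $!"     # remember each child PID
    done
    wait $WAITPIDS                  # block until every stop_server has returned
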
Jan 28 2023 18:00:06 EVENT START: stop_server epprd_app
|2023-01-28T18:00:06|22169|EVENT START: stop_server epprd_app |
+epprd_rg:stop_server[+59] version=%I%
+epprd_rg:stop_server[+62] STATUS=0
+epprd_rg:stop_server[+66] [ ! -n ]
+epprd_rg:stop_server[+68] EMULATE=REAL
+epprd_rg:stop_server[+71] PROC_RES=false
+epprd_rg:stop_server[+75] [[ APPLICATIONS != 0 ]]
+epprd_rg:stop_server[+75] [[ APPLICATIONS != GROUP ]]
+epprd_rg:stop_server[+76] PROC_RES=true
+epprd_rg:stop_server[+79] typeset WPARNAME WPARDIR EXEC
+epprd_rg:stop_server[+80] WPARDIR=
+epprd_rg:stop_server[+81] EXEC=
+epprd_rg:stop_server[+83] typeset -i rc=0
+epprd_rg:stop_server[+84] +epprd_rg:stop_server[+84] clwparname epprd_rg
+epprd_rg:clwparname[38] version=1.3.1.1
+epprd_rg:clwparname[44] clodmget '-qname = WPAR_NAME' -f group -n HACMPresource
+epprd_rg:clwparname[44] [[ -z '' ]]
+epprd_rg:clwparname[44] exit 0
WPARNAME=
+epprd_rg:stop_server[+85] rc=0
+epprd_rg:stop_server[+87] set -u
+epprd_rg:stop_server[+90] ALLSERVERS=All_servers
+epprd_rg:stop_server[+91] [ REAL = EMUL ]
+epprd_rg:stop_server[+96] cl_RMupdate resource_releasing All_servers stop_server
2023-01-28T18:00:06.306007
2023-01-28T18:00:06.310267
+epprd_rg:stop_server[+101] (( 0 == 0 ))
+epprd_rg:stop_server[+101] [[ -n ]]
+epprd_rg:stop_server[+120] +epprd_rg:stop_server[+120] cllsserv -cn epprd_app
+epprd_rg:stop_server[+120] cut -d: -f3
STOP=/etc/hacmp/epprd_stop.sh
+epprd_rg:stop_server[+121] +epprd_rg:stop_server[+121] echo /etc/hacmp/epprd_stop.sh
+epprd_rg:stop_server[+121] cut -d -f1
STOP_SCRIPT=/etc/hacmp/epprd_stop.sh
+epprd_rg:stop_server[+123] PATTERN=epprda epprd_app
+epprd_rg:stop_server[+123] [[ -n ]]
+epprd_rg:stop_server[+123] [[ -z ]]
+epprd_rg:stop_server[+123] [[ -x /etc/hacmp/epprd_stop.sh ]]
+epprd_rg:stop_server[+133] [ REAL = EMUL ]
+epprd_rg:stop_server[+139] amlog_trace Stopping application controller|epprd_app
+epprd_rg:stop_server[+55] clcycle clavailability.log
+epprd_rg:stop_server[+55] 1> /dev/null 2>& 1
+epprd_rg:stop_server[+55] +epprd_rg:stop_server[+55] cltime
DATE=2023-01-28T18:00:06.345425
+epprd_rg:stop_server[+55] echo |2023-01-28T18:00:06.345425|INFO: Stopping application controller|epprd_app
+epprd_rg:stop_server[+55] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:stop_server[+140] /etc/hacmp/epprd_stop.sh
+epprd_rg:stop_server[+140] ODMDIR=/etc/objrepos
+epprd_rg:stop_server[+141] rc=0
+epprd_rg:stop_server[+143] (( rc != 0 ))
+epprd_rg:stop_server[+151] amlog_trace Stopping application controller|epprd_app
+epprd_rg:stop_server[+55] clcycle clavailability.log
+epprd_rg:stop_server[+55] 1> /dev/null 2>& 1
+epprd_rg:stop_server[+55] +epprd_rg:stop_server[+55] cltime
DATE=2023-01-28T18:00:06.374249
+epprd_rg:stop_server[+55] echo |2023-01-28T18:00:06.374249|INFO: Stopping application controller|epprd_app
+epprd_rg:stop_server[+55] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:stop_server[+174] ALLNOERRSERV=All_nonerror_servers
+epprd_rg:stop_server[+175] [ REAL = EMUL ]
+epprd_rg:stop_server[+180] cl_RMupdate resource_down All_nonerror_servers stop_server
2023-01-28T18:00:06.396665
2023-01-28T18:00:06.400904
+epprd_rg:stop_server[+183] exit 0
Jan 28 2023 18:00:06 EVENT COMPLETED: stop_server epprd_app 0
|2023-01-28T18:00:06|22169|EVENT COMPLETED: stop_server epprd_app 0|
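
Inside that event, stop_server resolved the controller's stop script from the application definition and ran it. Condensed from the trace:

    STOP=$(cllsserv -cn epprd_app | cut -d: -f3)    # -> /etc/hacmp/epprd_stop.sh
    STOP_SCRIPT=$(echo $STOP | cut -d' ' -f1)       # strip any arguments
    if [[ -x $STOP_SCRIPT ]]
    then
        ODMDIR=/etc/objrepos $STOP_SCRIPT           # run against the production ODM
        rc=$?
    fi
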
+epprd_rg:process_resources[start_or_stop_applications_for_rg:267] RC=0
+epprd_rg:process_resources[start_or_stop_applications_for_rg:269] (( 0 != 0 && 0 != 11 ))
+epprd_rg:process_resources[start_or_stop_applications_for_rg:279] (( 0 != 0 ))
+epprd_rg:process_resources[start_or_stop_applications_for_rg:291] : Store the result for later accumulation
+epprd_rg:process_resources[start_or_stop_applications_for_rg:293] print 'epprd_rg 0'
+epprd_rg:process_resources[start_or_stop_applications_for_rg:293] 1>> /var/hacmp/log/.process_resources_applications.21954878.epprd_rg
+epprd_rg:process_resources[process_applications:396] : Look at all the status files to see if any were unsuccessful
+epprd_rg:process_resources[process_applications:399] cat /var/hacmp/log/.process_resources_applications.21954878.epprd_rg
+epprd_rg:process_resources[process_applications:399] read skip SUCCESS rest
+epprd_rg:process_resources[process_applications:401] [[ 0 != 0 ]]
+epprd_rg:process_resources[process_applications:411] rm -f /var/hacmp/log/.process_resources_applications.21954878.epprd_rg
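
Because the workers run in the background, their results come back through per-group status files rather than exit codes. The round-trip, sketched with the file name from the trace:

    STATUS_FILE=/var/hacmp/log/.process_resources_applications.21954878.epprd_rg
    print "epprd_rg 0" >> $STATUS_FILE              # child records "<group> <rc>"
    cat $STATUS_FILE | read skip SUCCESS rest       # parent reads it back
    [[ $SUCCESS != 0 ]] && START_STOP_FAILED=1      # accumulate any failure
    rm -f $STATUS_FILE
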
+epprd_rg:process_resources[process_applications:416] : Release lpar resources in one-shot now that applications are stopped
+epprd_rg:process_resources[process_applications:418] [[ RELEASE == RELEASE ]]
+epprd_rg:process_resources[process_applications:420] GROUPNAME=epprd_rg
+epprd_rg:process_resources[process_applications:420] export GROUPNAME
+epprd_rg:process_resources[process_applications:421] clmanageroha -o release -s -l epprd_app
+epprd_rg:process_resources[process_applications:421] 3>& 2
+epprd_rg:clmanageroha[318] : version='@(#)' 5881272 43haes/usr/sbin/cluster/events/clmanageroha.sh, 61aha_r726, 2205A_aha726, May 16 2022 12:15 PM
+epprd_rg:clmanageroha[321] clodmget -n -f connection_type HACMPhmcparam
+epprd_rg:clmanageroha[321] CONN_TYPE=0
+epprd_rg:clmanageroha[321] typeset -i CONN_TYPE
+epprd_rg:clmanageroha[323] clodmget -q name='epprda and object like POWERVS_*' -nf name HACMPnode
+epprd_rg:clmanageroha[323] 2> /dev/null
+epprd_rg:clmanageroha[323] [[ -n '' ]]
+epprd_rg:clmanageroha[326] export CONN_TYPE
+epprd_rg:clmanageroha[331] roha_session_open -o release -s -l epprd_app
+epprd_rg:clmanageroha[roha_session_open:131] roha_session.id=26018298
+epprd_rg:clmanageroha[roha_session_open:132] date
+epprd_rg:clmanageroha[roha_session_open:132] LC_ALL=C
+epprd_rg:clmanageroha[roha_session_open:132] roha_session_log 'Open session 26018298 at Sat Jan 28 18:00:06 KORST 2023'
[ROHALOG:26018298:(0.066)] Open session 26018298 at Sat Jan 28 18:00:06 KORST 2023
+epprd_rg:clmanageroha[roha_session_open:137] getopts :cso:l:t opt
+epprd_rg:clmanageroha[roha_session_open:146] roha_session.operation=release
+epprd_rg:clmanageroha[roha_session_open:137] getopts :cso:l:t opt
+epprd_rg:clmanageroha[roha_session_open:143] roha_session.systemmirror_mode=1
+epprd_rg:clmanageroha[roha_session_open:137] getopts :cso:l:t opt
+epprd_rg:clmanageroha[roha_session_open:149] roha_session.optimal_apps=epprd_app
+epprd_rg:clmanageroha[roha_session_open:137] getopts :cso:l:t opt
+epprd_rg:clmanageroha[roha_session_open:163] [[ release != @(acquire|release|adjust) ]]
+epprd_rg:clmanageroha[roha_session_open:168] no_roha_apps=0
+epprd_rg:clmanageroha[roha_session_open:168] typeset -i no_roha_apps
+epprd_rg:clmanageroha[roha_session_open:169] need_explicit_res_rel=0
+epprd_rg:clmanageroha[roha_session_open:169] typeset -i need_explicit_res_rel
+epprd_rg:clmanageroha[roha_session_open:187] [[ -n epprd_app ]]
+epprd_rg:clmanageroha[roha_session_open:187] sort
+epprd_rg:clmanageroha[roha_session_open:187] clmgr q roha
+epprd_rg:clmanageroha[roha_session_open:187] uniq -d
+epprd_rg:clmanageroha[roha_session_open:187] echo epprd_app
+epprd_rg:clmanageroha[roha_session_open:187] sort -u
+epprd_rg:clmanageroha[roha_session_open:187] echo '\nepprd_app'
+epprd_rg:clmanageroha[roha_session_open:187] [[ -z '' ]]
+epprd_rg:clmanageroha[roha_session_open:189] roha_session_log 'INFO: No ROHA configured on applications.\n'
[ROHALOG:26018298:(0.519)] INFO: No ROHA configured on applications.
[ROHALOG:26018298:(0.519)]
+epprd_rg:clmanageroha[roha_session_open:190] no_roha_apps=1
+epprd_rg:clmanageroha[roha_session_open:195] read_tunables
+epprd_rg:clmanageroha[roha_session_open:196] echo ''
+epprd_rg:clmanageroha[roha_session_open:196] grep -q epprda
+epprd_rg:clmanageroha[roha_session_open:197] (( 1 == 0 ))
+epprd_rg:clmanageroha[roha_session_open:202] (( 1 == 1 ))
+epprd_rg:clmanageroha[roha_session_open:203] roha_session_read_odm_dynresop DLPAR_MEM
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] clodmget -q key=DLPAR_MEM -nf value HACMPdynresop
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] ODMDIR=/etc/es/objrepos
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] out=''
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:817] print -- 0
+epprd_rg:clmanageroha[roha_session_open:203] (( 0 == 0.00 ))
+epprd_rg:clmanageroha[roha_session_open:204] roha_session_read_odm_dynresop DLPAR_PROCS
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] clodmget -q key=DLPAR_PROCS -nf value HACMPdynresop
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] ODMDIR=/etc/es/objrepos
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] out=''
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:817] print -- 0
+epprd_rg:clmanageroha[roha_session_open:204] (( 0 == 0 ))
+epprd_rg:clmanageroha[roha_session_open:205] roha_session_read_odm_dynresop DLPAR_PROC_UNITS
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] clodmget -q key=DLPAR_PROC_UNITS -nf value HACMPdynresop
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] ODMDIR=/etc/es/objrepos
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:816] out=''
+epprd_rg:clmanageroha[roha_session_read_odm_dynresop:817] print -- 0
+epprd_rg:clmanageroha[roha_session_open:205] (( 0 == 0.00 ))
+epprd_rg:clmanageroha[roha_session_open:206] roha_session_log 'INFO: Nothing to be done.\n'
[ROHALOG:26018298:(0.579)] INFO: Nothing to be done.
[ROHALOG:26018298:(0.579)]
+epprd_rg:clmanageroha[roha_session_open:207] exit 0
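
With no ROHA-configured applications, clmanageroha only verifies that no DLPAR resources were previously acquired before exiting. A sketch of that check (the clodmget queries are verbatim from the trace, which also sets ODMDIR=/etc/es/objrepos for them; the comparison is condensed):

    mem=$(clodmget -q key=DLPAR_MEM -nf value HACMPdynresop)
    procs=$(clodmget -q key=DLPAR_PROCS -nf value HACMPdynresop)
    pu=$(clodmget -q key=DLPAR_PROC_UNITS -nf value HACMPdynresop)
    if (( ${mem:-0} == 0 && ${procs:-0} == 0 && ${pu:-0} == 0 ))
    then
        roha_session_log 'INFO: Nothing to be done.\n'
        exit 0      # nothing was acquired earlier, so nothing to release
    fi
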
+epprd_rg:process_resources[process_applications:422] RC=0
+epprd_rg:process_resources[process_applications:423] (( 0 != 0 ))
+epprd_rg:process_resources[process_applications:433] [[ 0 != 0 ]]
+epprd_rg:process_resources[process_applications:434] [[ 0 != 0 ]]
+epprd_rg:process_resources[process_applications:435] [[ 0 != 0 ]]
+epprd_rg:process_resources[process_applications:439] return 0
+epprd_rg:process_resources[3550] RC=0
+epprd_rg:process_resources[3551] [[ RELEASE == RELEASE ]]
+epprd_rg:process_resources[3553] (( 0 != 0 ))
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T18:00:07.084178 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=MOUNT_FILESYSTEMS ACTION=RELEASE FILE_SYSTEMS='"/board;/board_org"' RESOURCE_GROUPS='"epprd_rg' '"' NFS_NETWORKS='""' NFS_HOSTS='""' IP_LABELS='""'
+epprd_rg:process_resources[1] JOB_TYPE=MOUNT_FILESYSTEMS
+epprd_rg:process_resources[1] ACTION=RELEASE
+epprd_rg:process_resources[1] FILE_SYSTEMS='/board;/board_org'
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[1] NFS_NETWORKS=''
+epprd_rg:process_resources[1] NFS_HOSTS=''
+epprd_rg:process_resources[1] IP_LABELS=''
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ MOUNT_FILESYSTEMS == RELEASE ]]
+epprd_rg:process_resources[3360] [[ MOUNT_FILESYSTEMS == ONLINE ]]
+epprd_rg:process_resources[3612] [[ RELEASE == ACQUIRE ]]
+epprd_rg:process_resources[3616] unmount_nfs_filesystems
+epprd_rg:process_resources[unmount_nfs_filesystems:1397] PS4_FUNC=unmount_nfs_filesystems
+epprd_rg:process_resources[unmount_nfs_filesystems:1397] typeset PS4_FUNC
+epprd_rg:process_resources[unmount_nfs_filesystems:1398] [[ high == high ]]
+epprd_rg:process_resources[unmount_nfs_filesystems:1398] set -x
+epprd_rg:process_resources[unmount_nfs_filesystems:1400] STAT=0
+epprd_rg:process_resources[unmount_nfs_filesystems:1402] cl_deactivate_nfs
+epprd_rg:cl_deactivate_nfs[+75] [[ high == high ]]
+epprd_rg:cl_deactivate_nfs[+75] version=1.2.5.1 $Source$
+epprd_rg:cl_deactivate_nfs[+77] STATUS=0
+epprd_rg:cl_deactivate_nfs[+78] PIDLIST=
+epprd_rg:cl_deactivate_nfs[+80] set -u
+epprd_rg:cl_deactivate_nfs[+154] PROC_RES=false
+epprd_rg:cl_deactivate_nfs[+158] [[ MOUNT_FILESYSTEMS != 0 ]]
+epprd_rg:cl_deactivate_nfs[+158] [[ MOUNT_FILESYSTEMS != GROUP ]]
+epprd_rg:cl_deactivate_nfs[+159] PROC_RES=true
+epprd_rg:cl_deactivate_nfs[+175] export GROUPNAME
+epprd_rg:cl_deactivate_nfs[+175] [[ true == true ]]
+epprd_rg:cl_deactivate_nfs[+178] get_list_head /board;/board_org
+epprd_rg:cl_deactivate_nfs[+178] read UNSORTED_FILELIST
+epprd_rg:cl_deactivate_nfs[+179] get_list_tail /board;/board_org
+epprd_rg:cl_deactivate_nfs[+179] read FILE_SYSTEMS
+epprd_rg:cl_deactivate_nfs[+186] +epprd_rg:cl_deactivate_nfs[+186] /bin/echo /board;/board_org
+epprd_rg:cl_deactivate_nfs[+186] /bin/sort -r
FILELIST=/board;/board_org
+epprd_rg:cl_deactivate_nfs[+188] echo /board;/board_org
+epprd_rg:cl_deactivate_nfs[+188] grep -q \;/
+epprd_rg:cl_deactivate_nfs[+189] CROSSMOUNT=1
+epprd_rg:cl_deactivate_nfs[+189] [[ 1 != 0 ]]
+epprd_rg:cl_deactivate_nfs[+194] +epprd_rg:cl_deactivate_nfs[+194] /bin/echo /board;/board_org
+epprd_rg:cl_deactivate_nfs[+194] /bin/sort -k 1,1r -t;
MNT=/board;/board_org
+epprd_rg:cl_deactivate_nfs[+200] ALLNFS=All_nfs_mounts
+epprd_rg:cl_deactivate_nfs[+201] cl_RMupdate resource_releasing All_nfs_mounts cl_deactivate_nfs
2023-01-28T18:00:07.139671
2023-01-28T18:00:07.144938
+epprd_rg:cl_deactivate_nfs[+203] +epprd_rg:cl_deactivate_nfs[+203] odmget -q name=RECOVERY_METHOD AND group=epprd_rg HACMPresource
+epprd_rg:cl_deactivate_nfs[+203] grep value
+epprd_rg:cl_deactivate_nfs[+203] awk {print $3}
+epprd_rg:cl_deactivate_nfs[+203] sed s/"//g
METHOD=sequential
+epprd_rg:cl_deactivate_nfs[+206] typeset PS4_LOOP=/board;/board_org
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+207] (( 1 != 0 ))
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+209] +epprd_rg:cl_deactivate_nfs:/board;/board_org[+209] cut -f2 -d;
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+209] echo /board;/board_org
fs=/board_org
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+210] +epprd_rg:cl_deactivate_nfs:/board;/board_org[+210] cut -f1 -d;
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+210] echo /board;/board_org
mnt=/board
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+220] +epprd_rg:cl_deactivate_nfs:/board;/board_org[+220] awk -v MFS=/board BEGIN {MFS=sprintf("^%s$", MFS)} \
match($4, "nfs") && match($3, MFS) {print $2}
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+220] mount
f=/board_org
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+220] [[ /board_org == /board_org ]]
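
Each crossmount is carried as a "local;remote" pair; the trace splits it with echo and cut, then matches the pair against the live mount table. An equivalent sketch (parameter expansions stand in for the cut calls; the awk is as traced):

    entry='/board;/board_org'
    mnt=${entry%;*}     # /board      - local mountpoint
    fs=${entry#*;}      # /board_org  - remote (exported) filesystem
    mount | awk -v MFS=$mnt 'BEGIN {MFS=sprintf("^%s$", MFS)}
        match($4, "nfs") && match($3, MFS) {print $2}'   # prints /board_org if mounted
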
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+227] pid=
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+227] [[ sequential == sequential ]]
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+227] [[ rg_move == node_down ]]
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+227] [[ rg_move == rg_move ]]
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+252] pid=27525458
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+252] [[ -n 27525458 ]]
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+251] do_umount /board
+epprd_rg:cl_deactivate_nfs(0):/board;/board_org[do_umount+4] typeset fs=/board
+epprd_rg:cl_deactivate_nfs(0):/board;/board_org[do_umount+31] cl_nfskill -k -u /board
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+264] grep -qw 27525458
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+264] echo
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+267] (( 1 != 0 ))
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+268] PIDLIST= 27525458
+epprd_rg:cl_deactivate_nfs:/board;/board_org[+274] unset PS4_LOOP
+epprd_rg:cl_deactivate_nfs[+279] wait 27525458
+epprd_rg:cl_deactivate_nfs(0):/board;/board_org[do_umount+33] sleep 2
+epprd_rg:cl_deactivate_nfs(2):/board;/board_org[do_umount+34] cl_nfskill -k -u /board
+epprd_rg:cl_deactivate_nfs(2):/board;/board_org[do_umount+36] sleep 2
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+39] amlog_trace Deactivating NFS|/board
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+49] clcycle clavailability.log
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+49] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+49] +epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+49] cltime
DATE=2023-01-28T18:00:11.194772
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+49] echo |2023-01-28T18:00:11.194772|INFO: Deactivating NFS|/board
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+49] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+40] typeset COUNT=20
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+41] true
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+42] date +%h %d %H:%M:%S.000
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+42] : Attempt 1 of 20 to unmount at Jan 28 18:00:11.000
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+43] umount -f /board
forced unmount of /board
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+44] (( 0 != 0 ))
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+61] amlog_trace Deactivating NFS|/board
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+49] clcycle clavailability.log
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+49] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+49] +epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+49] cltime
DATE=2023-01-28T18:00:11.235467
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+49] echo |2023-01-28T18:00:11.235467|INFO: Deactivating NFS|/board
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+49] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+62] break
+epprd_rg:cl_deactivate_nfs(4):/board;/board_org[do_umount+65] return 0
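
do_umount, running in the background child (PID 27525458) that cl_deactivate_nfs waits on, pairs cl_nfskill with a forced-umount retry loop. Reconstructed as a sketch (the two cl_nfskill passes, 2-second pauses, and 20-attempt budget are in the trace; the give-up branch is abbreviated and assumed):

    do_umount() {
        typeset fs=$1
        cl_nfskill -k -u $fs; sleep 2       # kill processes holding the NFS mount
        cl_nfskill -k -u $fs; sleep 2       # and again, for stragglers
        typeset COUNT=20
        while true
        do
            umount -f $fs && break          # "Attempt N of 20 to unmount"
            (( --COUNT <= 0 )) && return 1  # assumed: give up after 20 tries
            sleep 2
        done
        return 0
    }
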
+epprd_rg:cl_deactivate_nfs[+280] (( 0 != 0 ))
+epprd_rg:cl_deactivate_nfs[+291] ALLNOERRNFS=All_nonerror_nfs_mounts
+epprd_rg:cl_deactivate_nfs[+292] cl_RMupdate resource_down All_nonerror_nfs_mounts cl_deactivate_nfs
2023-01-28T18:00:11.258600
2023-01-28T18:00:11.262894
+epprd_rg:cl_deactivate_nfs[+295] exit 0
+epprd_rg:process_resources[unmount_nfs_filesystems:1403] RC=0
+epprd_rg:process_resources[unmount_nfs_filesystems:1406] (( 0 != 0 && 0 != 11 ))
+epprd_rg:process_resources[unmount_nfs_filesystems:1420] (( 0 != 0 ))
+epprd_rg:process_resources[unmount_nfs_filesystems:1426] return 0
+epprd_rg:process_resources[3617] RC=0
+epprd_rg:process_resources[3618] [[ RELEASE == RELEASE ]]
+epprd_rg:process_resources[3620] (( 0 != 0 ))
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T18:00:11.275642 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=EXPORT_FILESYSTEMS ACTION=RELEASE EXPORT_FILE_SYSTEMS='"/board_org"' EXPORT_FILE_SYSTEMS_V4='""' RESOURCE_GROUPS='"epprd_rg' '"' STABLE_STORAGE_PATH='""' IP_LABELS='""' DAEMONS='"NFS' '"'
+epprd_rg:process_resources[1] JOB_TYPE=EXPORT_FILESYSTEMS
+epprd_rg:process_resources[1] ACTION=RELEASE
+epprd_rg:process_resources[1] EXPORT_FILE_SYSTEMS=/board_org
+epprd_rg:process_resources[1] EXPORT_FILE_SYSTEMS_V4=''
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[1] STABLE_STORAGE_PATH=''
+epprd_rg:process_resources[1] IP_LABELS=''
+epprd_rg:process_resources[1] DAEMONS='NFS '
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ EXPORT_FILESYSTEMS == RELEASE ]]
+epprd_rg:process_resources[3360] [[ EXPORT_FILESYSTEMS == ONLINE ]]
+epprd_rg:process_resources[3595] [[ RELEASE == ACQUIRE ]]
+epprd_rg:process_resources[3599] unexport_filesystems
+epprd_rg:process_resources[unexport_filesystems:1576] PS4_FUNC=unexport_filesystems
+epprd_rg:process_resources[unexport_filesystems:1576] typeset PS4_FUNC
+epprd_rg:process_resources[unexport_filesystems:1577] [[ high == high ]]
+epprd_rg:process_resources[unexport_filesystems:1577] set -x
+epprd_rg:process_resources[unexport_filesystems:1578] STAT=0
+epprd_rg:process_resources[unexport_filesystems:1579] NFSSTOPPED=0
+epprd_rg:process_resources[unexport_filesystems:1580] RPCSTOPPED=0
+epprd_rg:process_resources[unexport_filesystems:1582] export NFSSTOPPED
+epprd_rg:process_resources[unexport_filesystems:1585] : For NFSv4, cl_unexport_fs will use STABLE_STORAGE_PATH, which is set by
+epprd_rg:process_resources[unexport_filesystems:1586] : clRGPA and can have colon-separated values for multiple RGs.
+epprd_rg:process_resources[unexport_filesystems:1587] : We will save off clRGPA values in stable_storage_path and then extract
+epprd_rg:process_resources[unexport_filesystems:1588] : each RG into STABLE_STORAGE_PATH for cl_unexport_fs.
+epprd_rg:process_resources[unexport_filesystems:1590] stable_storage_path=''
+epprd_rg:process_resources[unexport_filesystems:1590] typeset stable_storage_path
+epprd_rg:process_resources[unexport_filesystems:1594] export GROUPNAME
+epprd_rg:process_resources[unexport_filesystems:1596] get_list_head /board_org
+epprd_rg:process_resources[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources[get_list_head:60] set -x
+epprd_rg:process_resources[get_list_head:61] echo /board_org
+epprd_rg:process_resources[get_list_head:61] read listhead listtail
+epprd_rg:process_resources[get_list_head:61] IFS=:
+epprd_rg:process_resources[get_list_head:62] echo /board_org
+epprd_rg:process_resources[get_list_head:62] tr , ' '
+epprd_rg:process_resources[unexport_filesystems:1596] read LIST_OF_EXPORT_FILE_SYSTEMS_FOR_RG
+epprd_rg:process_resources[unexport_filesystems:1597] get_list_tail /board_org
+epprd_rg:process_resources[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources[get_list_tail:68] set -x
+epprd_rg:process_resources[get_list_tail:69] echo /board_org
+epprd_rg:process_resources[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources[get_list_tail:69] IFS=:
+epprd_rg:process_resources[get_list_tail:70] echo
+epprd_rg:process_resources[unexport_filesystems:1597] read EXPORT_FILE_SYSTEMS
+epprd_rg:process_resources[unexport_filesystems:1599] get_list_head
+epprd_rg:process_resources[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources[get_list_head:60] set -x
+epprd_rg:process_resources[get_list_head:61] echo
+epprd_rg:process_resources[get_list_head:61] read listhead listtail
+epprd_rg:process_resources[get_list_head:61] IFS=:
+epprd_rg:process_resources[get_list_head:62] echo
+epprd_rg:process_resources[get_list_head:62] tr , ' '
+epprd_rg:process_resources[unexport_filesystems:1599] read LIST_OF_EXPORT_FILE_SYSTEMS_V4_FOR_RG
+epprd_rg:process_resources[unexport_filesystems:1600] get_list_tail
+epprd_rg:process_resources[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources[get_list_tail:68] set -x
+epprd_rg:process_resources[get_list_tail:69] echo
+epprd_rg:process_resources[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources[get_list_tail:69] IFS=:
+epprd_rg:process_resources[get_list_tail:70] echo
+epprd_rg:process_resources[unexport_filesystems:1600] read EXPORT_FILE_SYSTEMS_V4
+epprd_rg:process_resources[unexport_filesystems:1601] get_list_head
+epprd_rg:process_resources[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources[get_list_head:60] set -x
+epprd_rg:process_resources[get_list_head:61] echo
+epprd_rg:process_resources[get_list_head:61] read listhead listtail
+epprd_rg:process_resources[get_list_head:61] IFS=:
+epprd_rg:process_resources[get_list_head:62] echo
+epprd_rg:process_resources[get_list_head:62] tr , ' '
+epprd_rg:process_resources[unexport_filesystems:1601] read STABLE_STORAGE_PATH
+epprd_rg:process_resources[unexport_filesystems:1602] get_list_tail
+epprd_rg:process_resources[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources[get_list_tail:68] set -x
+epprd_rg:process_resources[get_list_tail:69] echo
+epprd_rg:process_resources[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources[get_list_tail:69] IFS=:
+epprd_rg:process_resources[get_list_tail:70] echo
+epprd_rg:process_resources[unexport_filesystems:1602] read stable_storage_path
+epprd_rg:process_resources[unexport_filesystems:1604] cl_unexport_fs /board_org ''
+epprd_rg:cl_unexport_fs[136] version=%I%
+epprd_rg:cl_unexport_fs[139] . /usr/es/sbin/cluster/events/utils/cl_nfs_utils
+epprd_rg:cl_unexport_fs[98] PROGNAME=cl_unexport_fs
+epprd_rg:cl_unexport_fs[99] [[ high == high ]]
+epprd_rg:cl_unexport_fs[101] set -x
+epprd_rg:cl_unexport_fs[102] version=%I
+epprd_rg:cl_unexport_fs[105] cl_exports_data=''
+epprd_rg:cl_unexport_fs[105] typeset cl_exports_data
+epprd_rg:cl_unexport_fs[106] EXPFILE=/usr/es/sbin/cluster/etc/exports
+epprd_rg:cl_unexport_fs[141] UNEXPORT_V3=/board_org
+epprd_rg:cl_unexport_fs[142] UNEXPORT_V4=''
+epprd_rg:cl_unexport_fs[144] STATUS=0
+epprd_rg:cl_unexport_fs[146] PROC_RES=false
+epprd_rg:cl_unexport_fs[150] [[ EXPORT_FILESYSTEMS != 0 ]]
+epprd_rg:cl_unexport_fs[150] [[ EXPORT_FILESYSTEMS != GROUP ]]
+epprd_rg:cl_unexport_fs[151] PROC_RES=true
+epprd_rg:cl_unexport_fs[154] set -u
+epprd_rg:cl_unexport_fs[156] (( 2 != 2 ))
+epprd_rg:cl_unexport_fs[162] [[ __AIX__ == __AIX__ ]]
+epprd_rg:cl_unexport_fs[164] oslevel -r
+epprd_rg:cl_unexport_fs[164] cut -c1-2
+epprd_rg:cl_unexport_fs[164] (( 72 > 52 ))
+epprd_rg:cl_unexport_fs[166] FORCE=-F
+epprd_rg:cl_unexport_fs[180] EXPFILE=/usr/es/sbin/cluster/etc/exports
+epprd_rg:cl_unexport_fs[181] DARE_EVENT=reconfig_resource_release
+epprd_rg:cl_unexport_fs[184] unexport_v4=''
+epprd_rg:cl_unexport_fs[185] [[ -z '' ]]
+epprd_rg:cl_unexport_fs[185] [[ rg_move == reconfig_resource_release ]]
+epprd_rg:cl_unexport_fs[196] [[ -z '' ]]
+epprd_rg:cl_unexport_fs[196] [[ -r /usr/es/sbin/cluster/etc/exports ]]
+epprd_rg:cl_unexport_fs[198] unexport_v3=''
+epprd_rg:cl_unexport_fs[204] getline_exports /board_org
+epprd_rg:cl_unexport_fs[getline_exports:44] cl_exports_data=''
+epprd_rg:cl_unexport_fs[getline_exports:45] line=''
+epprd_rg:cl_unexport_fs[getline_exports:45] typeset line
+epprd_rg:cl_unexport_fs[getline_exports:46] flag=0
+epprd_rg:cl_unexport_fs[getline_exports:46] typeset -i flag
+epprd_rg:cl_unexport_fs[getline_exports:47] fs=/board_org
+epprd_rg:cl_unexport_fs[getline_exports:49] [[ -z /board_org ]]
+epprd_rg:cl_unexport_fs[getline_exports:54] [[ -r /usr/es/sbin/cluster/etc/exports ]]
+epprd_rg:cl_unexport_fs[getline_exports:56] cat /usr/es/sbin/cluster/etc/exports
+epprd_rg:cl_unexport_fs[getline_exports:56] read -r line
+epprd_rg:cl_unexport_fs[getline_exports:59] line='/sapmnt/EPP -sec=sys:krb5p:krb5i:krb5:dh,rw,access=epprdap,root=epprdap'
+epprd_rg:cl_unexport_fs[getline_exports:60] line='/sapmnt/EPP -sec=sys:krb5p:krb5i:krb5:dh,rw,access=epprdap,root=epprdap'
+epprd_rg:cl_unexport_fs[getline_exports:63] [[ '/sapmnt/EPP -sec=sys:krb5p:krb5i:krb5:dh,rw,access=epprdap,root=epprdap' == #* ]]
+epprd_rg:cl_unexport_fs[getline_exports:68] echo '/sapmnt/EPP -sec=sys:krb5p:krb5i:krb5:dh,rw,access=epprdap,root=epprdap'
+epprd_rg:cl_unexport_fs[getline_exports:68] grep -q '^[[:space:]]*/board_org[[:space:]]'
+epprd_rg:cl_unexport_fs[getline_exports:69] (( 1 == 0 ))
+epprd_rg:cl_unexport_fs[getline_exports:74] [[ 0 == 1 ]]
+epprd_rg:cl_unexport_fs[getline_exports:56] read -r line
+epprd_rg:cl_unexport_fs[getline_exports:89] return 0
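
getline_exports scans the HACMP-specific exports file for an entry matching the filesystem being unexported; here the only entry is /sapmnt/EPP, so /board_org gets no stored options and falls through to NFSv3 defaults. The scan, condensed (continuation-line handling omitted):

    while read -r line
    do
        [[ $line == \#* ]] && continue      # skip comments
        if echo "$line" | grep -q "^[[:space:]]*$fs[[:space:]]"
        then
            cl_exports_data=$line           # found the entry for $fs
            break
        fi
    done < /usr/es/sbin/cluster/etc/exports
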
+epprd_rg:cl_unexport_fs[205] export_line=''
+epprd_rg:cl_unexport_fs[210] echo
+epprd_rg:cl_unexport_fs[210] awk '{ for (i=2; i<=NF; i++) printf $i " "; print "" }'
+epprd_rg:cl_unexport_fs[211] cut -d- -f2-
+epprd_rg:cl_unexport_fs[211] tr , ' '
+epprd_rg:cl_unexport_fs[210] options=''
+epprd_rg:cl_unexport_fs[217] vers_missing=1
+epprd_rg:cl_unexport_fs[240] (( vers_missing ))
+epprd_rg:cl_unexport_fs[240] unexport_v3=' /board_org'
+epprd_rg:cl_unexport_fs[243] UNEXPORT_V3=' /board_org'
+epprd_rg:cl_unexport_fs[244] UNEXPORT_V4=''
+epprd_rg:cl_unexport_fs[247] hasrv=''
+epprd_rg:cl_unexport_fs[249] [[ -z '' ]]
+epprd_rg:cl_unexport_fs[251] query=name='STABLE_STORAGE_PATH AND group=epprd_rg'
+epprd_rg:cl_unexport_fs[252] odmget -q name='STABLE_STORAGE_PATH AND group=epprd_rg' HACMPresource
+epprd_rg:cl_unexport_fs[252] sed -n $'s/^[ \t]*value = "\\(.*\\)"/\\1/p'
+epprd_rg:cl_unexport_fs[252] STABLE_STORAGE_PATH=''
+epprd_rg:cl_unexport_fs[256] [[ -z '' ]]
+epprd_rg:cl_unexport_fs[258] STABLE_STORAGE_PATH=/var/adm/nfsv4.hacmp/epprd_rg
+epprd_rg:cl_unexport_fs[261] [[ -z '' ]]
+epprd_rg:cl_unexport_fs[263] query=name='SERVICE_LABEL AND group=epprd_rg'
+epprd_rg:cl_unexport_fs[264] sed -n $'s/^[ \t]*value = "\\(.*\\)"/\\1/p'
+epprd_rg:cl_unexport_fs[264] odmget -q name='SERVICE_LABEL AND group=epprd_rg' HACMPresource
+epprd_rg:cl_unexport_fs[264] SERVICE_LABEL=epprd
+epprd_rg:cl_unexport_fs[268] ps -eo args
+epprd_rg:cl_unexport_fs[268] grep -w nfsd
+epprd_rg:cl_unexport_fs[268] grep -qw -- '-gp on'
+epprd_rg:cl_unexport_fs[272] gp=off
+epprd_rg:cl_unexport_fs[275] /usr/sbin/bootinfo -K
+epprd_rg:cl_unexport_fs[275] KERNEL_BITS=64
+epprd_rg:cl_unexport_fs[277] [[ off == on ]]
+epprd_rg:cl_unexport_fs[282] NFSv4_REGISTERED=0
+epprd_rg:cl_unexport_fs[286] V3=:2:3
+epprd_rg:cl_unexport_fs[287] V4=:4
+epprd_rg:cl_unexport_fs[289] [[ rg_move != reconfig_resource_release ]]
+epprd_rg:cl_unexport_fs[290] [[ rg_move != release_vg_fs ]]
+epprd_rg:cl_unexport_fs[298] [[ -n '' ]]
+epprd_rg:cl_unexport_fs[321] V3=''
+epprd_rg:cl_unexport_fs[322] V4=''
+epprd_rg:cl_unexport_fs[326] ALLEXPORTS=All_exports
+epprd_rg:cl_unexport_fs[328] cl_RMupdate resource_releasing All_exports cl_unexport_fs
2023-01-28T18:00:11.504811
2023-01-28T18:00:11.509091
+epprd_rg:cl_unexport_fs[330] tr ' ' '\n'
+epprd_rg:cl_unexport_fs[330] echo /board_org
+epprd_rg:cl_unexport_fs[330] sort
+epprd_rg:cl_unexport_fs[330] FILESYSTEM_LIST=/board_org
+epprd_rg:cl_unexport_fs[334] v3=''
+epprd_rg:cl_unexport_fs[335] v4=''
+epprd_rg:cl_unexport_fs[336] root=''
+epprd_rg:cl_unexport_fs[337] old_options=''
+epprd_rg:cl_unexport_fs[338] new_options=''
+epprd_rg:cl_unexport_fs[340] exportfs
+epprd_rg:cl_unexport_fs[340] grep '^[[:space:]]*/board_org[[:space:]]'
+epprd_rg:cl_unexport_fs[340] export_line='/board_org -root=epprd:epprda:epprds'
+epprd_rg:cl_unexport_fs[342] [[ -z '/board_org -root=epprd:epprda:epprds' ]]
+epprd_rg:cl_unexport_fs[344] echo /board_org -root=epprd:epprda:epprds
+epprd_rg:cl_unexport_fs[344] cut '-d ' -f2-
+epprd_rg:cl_unexport_fs[344] cut -d- -f2-
+epprd_rg:cl_unexport_fs[344] tr , ' '
+epprd_rg:cl_unexport_fs[344] old_options=root=epprd:epprda:epprds
+epprd_rg:cl_unexport_fs[365] new_options=,root=epprd:epprda:epprds
+epprd_rg:cl_unexport_fs[371] [[ -z '' ]]
+epprd_rg:cl_unexport_fs[371] v3=''
+epprd_rg:cl_unexport_fs[377] NFS_VER3=''
+epprd_rg:cl_unexport_fs[380] [[ /board_org == /board_org ]]
+epprd_rg:cl_unexport_fs[380] v3=''
+epprd_rg:cl_unexport_fs[380] NFS_VER3=3
+epprd_rg:cl_unexport_fs[380] break
+epprd_rg:cl_unexport_fs[382] NFS_VER4=''
+epprd_rg:cl_unexport_fs[387] [[ '' == 4 ]]
+epprd_rg:cl_unexport_fs[400] echo ''
+epprd_rg:cl_unexport_fs[400] cut -d: -f2-
+epprd_rg:cl_unexport_fs[400] vers=''
+epprd_rg:cl_unexport_fs[402] [[ -z '' ]]
+epprd_rg:cl_unexport_fs[404] [[ '' == 4 ]]
+epprd_rg:cl_unexport_fs[408] exportfs -i -u -F /board_org
exportfs: unexported /board_org
+epprd_rg:cl_unexport_fs[410] (( 0 != 0 ))
+epprd_rg:cl_unexport_fs[417] continue
+epprd_rg:cl_unexport_fs[452] [[ -n '' ]]
+epprd_rg:cl_unexport_fs[480] ALLNOERREXPORT=All_nonerror_exports
+epprd_rg:cl_unexport_fs[482] cl_RMupdate resource_down All_nonerror_exports cl_unexport_fs
2023-01-28T18:00:11.566842
2023-01-28T18:00:11.571199
+epprd_rg:cl_unexport_fs[484] exit 0
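
The effective work of cl_unexport_fs here reduces to one forced unexport; everything before it classified the filesystem as v3-only and recovered the current export options from exportfs. Condensed:

    export_line=$(exportfs | grep '^[[:space:]]*/board_org[[:space:]]')   # live entry
    [[ -n $export_line ]] && exportfs -i -u -F /board_org    # unexport; flags as traced
    cl_RMupdate resource_down All_nonerror_exports cl_unexport_fs   # notify clstrmgr
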
+epprd_rg:process_resources[unexport_filesystems:1608] return 0
+epprd_rg:process_resources[3600] RC=0
+epprd_rg:process_resources[3601] [[ RELEASE == RELEASE ]]
+epprd_rg:process_resources[3603] (( 0 != 0 ))
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T18:00:11.584168 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=FILESYSTEMS ACTION=RELEASE FILE_SYSTEMS='"/usr/sap,/sapmnt,/oracle/EPP/sapdata4,/oracle/EPP/sapdata3,/oracle/EPP/sapdata2,/oracle/EPP/sapdata1,/oracle/EPP/origlogB,/oracle/EPP/origlogA,/oracle/EPP/oraarch,/oracle/EPP/mirrlogB,/oracle/EPP/mirrlogA,/oracle/EPP,/oracle,/board_org"' RESOURCE_GROUPS='"epprd_rg' '"' FSCHECK_TOOLS='""' RECOVERY_METHODS='"sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential"'
+epprd_rg:process_resources[1] JOB_TYPE=FILESYSTEMS
+epprd_rg:process_resources[1] ACTION=RELEASE
+epprd_rg:process_resources[1] FILE_SYSTEMS=/usr/sap,/sapmnt,/oracle/EPP/sapdata4,/oracle/EPP/sapdata3,/oracle/EPP/sapdata2,/oracle/EPP/sapdata1,/oracle/EPP/origlogB,/oracle/EPP/origlogA,/oracle/EPP/oraarch,/oracle/EPP/mirrlogB,/oracle/EPP/mirrlogA,/oracle/EPP,/oracle,/board_org
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[1] FSCHECK_TOOLS=''
+epprd_rg:process_resources[1] RECOVERY_METHODS=sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ FILESYSTEMS == RELEASE ]]
+epprd_rg:process_resources[3360] [[ FILESYSTEMS == ONLINE ]]
+epprd_rg:process_resources[3482] process_file_systems RELEASE
+epprd_rg:process_resources[process_file_systems:2640] PS4_FUNC=process_file_systems
+epprd_rg:process_resources[process_file_systems:2640] typeset PS4_FUNC
+epprd_rg:process_resources[process_file_systems:2641] [[ high == high ]]
+epprd_rg:process_resources[process_file_systems:2641] set -x
+epprd_rg:process_resources[process_file_systems:2643] STAT=0
+epprd_rg:process_resources[process_file_systems:2645] [[ RELEASE == ACQUIRE ]]
+epprd_rg:process_resources[process_file_systems:2667] cl_deactivate_fs
+epprd_rg:cl_deactivate_fs[860] version=1.6
+epprd_rg:cl_deactivate_fs[863] STATUS=0
+epprd_rg:cl_deactivate_fs[863] typeset -li STATUS
+epprd_rg:cl_deactivate_fs[864] SLEEP=1
+epprd_rg:cl_deactivate_fs[864] typeset -li SLEEP
+epprd_rg:cl_deactivate_fs[865] LIMIT=60
+epprd_rg:cl_deactivate_fs[865] typeset -li LIMIT
+epprd_rg:cl_deactivate_fs[866] export SLEEP
+epprd_rg:cl_deactivate_fs[867] export LIMIT
+epprd_rg:cl_deactivate_fs[868] TMP_FILENAME=_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs[870] (( 0 != 0 ))
+epprd_rg:cl_deactivate_fs[875] OEM_CALL=false
+epprd_rg:cl_deactivate_fs[879] : Check here to see if the forced unmount option can be used
+epprd_rg:cl_deactivate_fs[881] FORCE_OK=''
+epprd_rg:cl_deactivate_fs[881] export FORCE_OK
+epprd_rg:cl_deactivate_fs[882] O_FlAG=''
+epprd_rg:cl_deactivate_fs[882] export O_FlAG
+epprd_rg:cl_deactivate_fs[885] : Each of the V, R, M and F fields are padded to fixed length,
+epprd_rg:cl_deactivate_fs[886] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_deactivate_fs[887] : 99.99.999.999
+epprd_rg:cl_deactivate_fs[889] typeset -li V R M F
+epprd_rg:cl_deactivate_fs[890] typeset -Z2 R
+epprd_rg:cl_deactivate_fs[891] typeset -Z3 M
+epprd_rg:cl_deactivate_fs[892] typeset -Z3 F
+epprd_rg:cl_deactivate_fs[893] jfs2_lvl=601002000
+epprd_rg:cl_deactivate_fs[893] typeset -li jfs2_lvl
+epprd_rg:cl_deactivate_fs[894] fuser_lvl=601004000
+epprd_rg:cl_deactivate_fs[894] typeset -li fuser_lvl
+epprd_rg:cl_deactivate_fs[895] VRMF=0
+epprd_rg:cl_deactivate_fs[895] typeset -li VRMF
+epprd_rg:cl_deactivate_fs[898] : Here try and figure out what level of JFS2 is installed
+epprd_rg:cl_deactivate_fs[900] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_deactivate_fs[900] cut -f3 -d:
+epprd_rg:cl_deactivate_fs[900] read V R M F
+epprd_rg:cl_deactivate_fs[900] IFS=.
+epprd_rg:cl_deactivate_fs[901] VRMF=702005102
+epprd_rg:cl_deactivate_fs[903] (( 702005102 >= 601002000 ))
+epprd_rg:cl_deactivate_fs[906] : JFS2 at this level that supports forced unmount
+epprd_rg:cl_deactivate_fs[908] FORCE_OK=true
+epprd_rg:cl_deactivate_fs[911] (( 702005102 >= 601004000 ))
+epprd_rg:cl_deactivate_fs[914] : fuser at this level supports the -O flag
+epprd_rg:cl_deactivate_fs[916] O_FLAG=-O
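
The typeset juggling above is a version-comparison trick: each component of the bos.rte.filesystem VRMF is zero-padded to a fixed width so the whole fileset level can be compared as a single integer. As a sketch:

    typeset -li V R M F VRMF
    typeset -Z2 R; typeset -Z3 M; typeset -Z3 F       # zero-pad R, M, F
    lslpp -lcqOr bos.rte.filesystem | cut -f3 -d: | IFS=. read V R M F
    VRMF=$V$R$M$F                                     # 7.2.5.102 -> 702005102
    (( VRMF >= 601002000 )) && FORCE_OK=true          # JFS2 supports forced unmount
    (( VRMF >= 601004000 )) && O_FLAG=-O              # fuser supports -O
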
+epprd_rg:cl_deactivate_fs[920] : if JOB_TYPE is set and is not GROUP, then process_resources is parent
+epprd_rg:cl_deactivate_fs[922] [[ FILESYSTEMS != 0 ]]
+epprd_rg:cl_deactivate_fs[922] [[ FILESYSTEMS != GROUP ]]
+epprd_rg:cl_deactivate_fs[923] deactivate_fs_process_resources
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:705] STATUS=0
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:705] typeset -li STATUS
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:708] : for the temp file, just take the first rg name
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:710] cut -f 1 -d ' '
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:710] print epprd_rg
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:710] read RES_GRP
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:711] TMP_FILENAME=epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:714] : Remove the status file if it already exists
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:716] rm -f /tmp/epprd_rg_deactivate_fs.tmp
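The status file is named after the first resource group in the list and removed before any work starts, so a stale file left by an earlier event cannot be read as this run's result. Restated as a standalone fragment, with RESOURCE_GROUPS as set at process_resources:3342:

    # Derive the per-run status-file name from the first resource group.
    print $RESOURCE_GROUPS | cut -f 1 -d ' ' | read RES_GRP
    TMP_FILENAME=${RES_GRP}_deactivate_fs.tmp
    rm -f /tmp/$TMP_FILENAME     # discard any leftover from a prior run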
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:719] : go through all resource groups
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:721] pid_list=''
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:724] export GROUPNAME
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:725] export RECOVERY_METHOD
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:728] : Get a reverse-sorted list of the filesystems in this RG so that they
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:729] : are released in the opposite order of mounting. This is needed for nested mounts.
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:731] print /usr/sap,/sapmnt,/oracle/EPP/sapdata4,/oracle/EPP/sapdata3,/oracle/EPP/sapdata2,/oracle/EPP/sapdata1,/oracle/EPP/origlogB,/oracle/EPP/origlogA,/oracle/EPP/oraarch,/oracle/EPP/mirrlogB,/oracle/EPP/mirrlogA,/oracle/EPP,/oracle,/board_org
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:731] read LIST_OF_FILE_SYSTEMS_FOR_RG FILE_SYSTEMS
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:731] IFS=:
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:732] tr , '\n'
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:732] print /usr/sap,/sapmnt,/oracle/EPP/sapdata4,/oracle/EPP/sapdata3,/oracle/EPP/sapdata2,/oracle/EPP/sapdata1,/oracle/EPP/origlogB,/oracle/EPP/origlogA,/oracle/EPP/oraarch,/oracle/EPP/mirrlogB,/oracle/EPP/mirrlogA,/oracle/EPP,/oracle,/board_org
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:732] sort -ru
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:732] LIST_OF_FILE_SYSTEMS_FOR_RG=$'/usr/sap\n/sapmnt\n/oracle/EPP/sapdata4\n/oracle/EPP/sapdata3\n/oracle/EPP/sapdata2\n/oracle/EPP/sapdata1\n/oracle/EPP/origlogB\n/oracle/EPP/origlogA\n/oracle/EPP/oraarch\n/oracle/EPP/mirrlogB\n/oracle/EPP/mirrlogA\n/oracle/EPP\n/oracle\n/board_org'
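Because a parent mount point is always a proper prefix of its children, plain lexicographic order puts /oracle before /oracle/EPP before /oracle/EPP/sapdata1; reversing it with sort -ru (the -u also collapses duplicates) yields children before parents, which is the order required to unmount nested filesystems. The idiom on its own, with fs_csv as a hypothetical stand-in for the comma-separated list:

    # Children must unmount before parents; a reverse sort gives that order.
    fs_csv='/usr/sap,/sapmnt,/oracle/EPP,/oracle'    # hypothetical subset
    print $fs_csv | tr ',' '\n' | sort -ru | while read fs
    do
        print "umount $fs"   # /usr/sap, /sapmnt, /oracle/EPP, /oracle
    done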
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:733] find_nested_mounts $'/usr/sap\n/sapmnt\n/oracle/EPP/sapdata4\n/oracle/EPP/sapdata3\n/oracle/EPP/sapdata2\n/oracle/EPP/sapdata1\n/oracle/EPP/origlogB\n/oracle/EPP/origlogA\n/oracle/EPP/oraarch\n/oracle/EPP/mirrlogB\n/oracle/EPP/mirrlogA\n/oracle/EPP\n/oracle\n/board_org'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:88] given_fs_list=$'/usr/sap\n/sapmnt\n/oracle/EPP/sapdata4\n/oracle/EPP/sapdata3\n/oracle/EPP/sapdata2\n/oracle/EPP/sapdata1\n/oracle/EPP/origlogB\n/oracle/EPP/origlogA\n/oracle/EPP/oraarch\n/oracle/EPP/mirrlogB\n/oracle/EPP/mirrlogA\n/oracle/EPP\n/oracle\n/board_org'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:88] typeset given_fs_list
+epprd_rg:cl_deactivate_fs[find_nested_mounts:90] typeset first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:91] mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:91] mount_out=$' node mounted mounted over vfs date options \n-------- --------------- --------------- ------ ------------ --------------- \n /dev/hd4 / jfs2 Nov 16 15:10 rw,log=/dev/hd8 \n /dev/hd2 /usr jfs2 Nov 16 15:10 rw,log=/dev/hd8 \n /dev/hd9var /var jfs2 Nov 16 15:10 rw,log=/dev/hd8 \n /dev/hd3 /tmp jfs2 Nov 16 15:10 rw,log=/dev/hd8 \n /dev/hd1 /home jfs2 Nov 16 15:11 rw,log=/dev/hd8 \n /dev/hd11admin /admin jfs2 Nov 16 15:11 rw,log=/dev/hd8 \n /proc /proc procfs Nov 16 15:11 rw \n /dev/hd10opt /opt jfs2 Nov 16 15:11 rw,log=/dev/hd8 \n /dev/livedump /var/adm/ras/livedump jfs2 Nov 16 15:11 rw,log=/dev/hd8 \n /dev/ptflv /ptf jfs2 Nov 16 15:11 rw,log=/dev/hd8 \n /ahafs /aha ahafs Nov 16 15:11 rw \n /dev/boardlv /board_org jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/oraclelv /oracle jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/epplv /oracle/EPP jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogAlv /oracle/EPP/mirrlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogBlv /oracle/EPP/mirrlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/oraarchlv /oracle/EPP/oraarch jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogAlv /oracle/EPP/origlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogBlv /oracle/EPP/origlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata1lv /oracle/EPP/sapdata1 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata2lv /oracle/EPP/sapdata2 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata3lv /oracle/EPP/sapdata3 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata4lv /oracle/EPP/sapdata4 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapmntlv /sapmnt jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/saplv /usr/sap jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\nepdev /sapcd /sapcd nfs3 Jan 28 17:37 bg,soft,intr,sec=sys,rw\nepdev /usr/sap/trans /usr/sap/trans nfs3 Jan 28 17:37 bg,soft,intr,sec=sys,rw'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:91] typeset mount_out
+epprd_rg:cl_deactivate_fs[find_nested_mounts:92] discovered_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:92] typeset discovered_fs
+epprd_rg:cl_deactivate_fs[find_nested_mounts:93] typeset line fs nested_fs
+epprd_rg:cl_deactivate_fs[find_nested_mounts:94] typeset mounted_fs_list
+epprd_rg:cl_deactivate_fs[find_nested_mounts:96] fs_count=0
+epprd_rg:cl_deactivate_fs[find_nested_mounts:96] typeset -li fs_count
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] print -- $' node mounted mounted over vfs date options \n-------- --------------- --------------- ------ ------------ --------------- \n /dev/hd4 / jfs2 Nov 16 15:10 rw,log=/dev/hd8 \n /dev/hd2 /usr jfs2 Nov 16 15:10 rw,log=/dev/hd8 \n /dev/hd9var /var jfs2 Nov 16 15:10 rw,log=/dev/hd8 \n /dev/hd3 /tmp jfs2 Nov 16 15:10 rw,log=/dev/hd8 \n /dev/hd1 /home jfs2 Nov 16 15:11 rw,log=/dev/hd8 \n /dev/hd11admin /admin jfs2 Nov 16 15:11 rw,log=/dev/hd8 \n /proc /proc procfs Nov 16 15:11 rw \n /dev/hd10opt /opt jfs2 Nov 16 15:11 rw,log=/dev/hd8 \n /dev/livedump /var/adm/ras/livedump jfs2 Nov 16 15:11 rw,log=/dev/hd8 \n /dev/ptflv /ptf jfs2 Nov 16 15:11 rw,log=/dev/hd8 \n /ahafs /aha ahafs Nov 16 15:11 rw \n /dev/boardlv /board_org jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/oraclelv /oracle jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/epplv /oracle/EPP jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogAlv /oracle/EPP/mirrlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogBlv /oracle/EPP/mirrlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/oraarchlv /oracle/EPP/oraarch jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogAlv /oracle/EPP/origlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogBlv /oracle/EPP/origlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata1lv /oracle/EPP/sapdata1 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata2lv /oracle/EPP/sapdata2 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata3lv /oracle/EPP/sapdata3 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata4lv /oracle/EPP/sapdata4 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapmntlv /sapmnt jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/saplv /usr/sap jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\nepdev /sapcd /sapcd nfs3 Jan 28 17:37 bg,soft,intr,sec=sys,rw\nepdev /usr/sap/trans /usr/sap/trans nfs3 Jan 28 17:37 bg,soft,intr,sec=sys,rw'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] grep -w /usr/sap
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] mounted_fs_list=$' /dev/saplv /usr/sap jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\nepdev /usr/sap/trans /usr/sap/trans nfs3 Jan 28 17:37 bg,soft,intr,sec=sys,rw'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] print -- $' /dev/saplv /usr/sap jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\nepdev /usr/sap/trans /usr/sap/trans nfs3 Jan 28 17:37 bg,soft,intr,sec=sys,rw'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] wc -l
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] fs_count=' 2'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:102] (( 2 > 1 ))
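The (( fs_count > 1 )) test leans on a property of grep -w: '/' is not a word character, so the word /usr/sap matches both the filesystem's own line and any line whose mount point extends it, such as /usr/sap/trans, but not a sibling like /usr/sapx. More than one match therefore means something is mounted beneath this filesystem. A quick demonstration with hypothetical input:

    # grep -w treats '/' as a word boundary, so children match too.
    print '/dev/saplv /usr/sap jfs2\nepdev /usr/sap/trans /usr/sap/trans nfs3\n/dev/xlv /usr/sapx jfs2' |
        grep -cw /usr/sap        # prints 2; /usr/sapx is not a word match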
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] echo $' /dev/saplv /usr/sap jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\nepdev /usr/sap/trans /usr/sap/trans nfs3 Jan 28 17:37 bg,soft,intr,sec=sys,rw'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/saplv /usr/sap jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /usr/sap == /usr/sap/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:128] [[ jfs2 == /usr/sap/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print 'epdev /usr/sap/trans /usr/sap/trans nfs3 Jan 28 17:37 bg,soft,intr,sec=sys,rw'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /usr/sap/trans == /usr/sap/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /usr/sap/trans == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:128] [[ /usr/sap/trans == /usr/sap/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:128] [[ nfs3 == nfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:131] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:132] : exporting_node exported_file_system lower_mount_point vfs
+epprd_rg:cl_deactivate_fs[find_nested_mounts:133] : epdev /usr/sap/trans /usr/sap/trans nfs3
+epprd_rg:cl_deactivate_fs[find_nested_mounts:135] nested_fs=/usr/sap/trans
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /usr/sap/trans ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /usr/sap/trans
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
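Each matching line is classified by its shape: a local mount reads 'device mount_point vfs ...', so the candidate nested path is the second field and the vfs field must match jfs*, while an NFS mount reads 'node exported_fs mount_point vfs ...', so the candidate is the third field and the vfs field must match nfs*. A sketch reconstructed from the tests traced at find_nested_mounts:120 and :128, with $fs standing for the filesystem being probed:

    # Classify one mount-output line and pick out any nested mount point.
    print -- $line | read first second third fourth rest
    nested_fs=''
    if [[ $second == ${fs}/* && $third == jfs* ]] ; then
        nested_fs=$second    # local:  /dev/lv  /mount/point  jfs2 ...
    elif [[ $third == ${fs}/* && $fourth == nfs* ]] ; then
        nested_fs=$third     # NFS:    node  exported_fs  /mount/point  nfs3 ...
    fi
    [[ -n $nested_fs ]] && discovered_fs="$discovered_fs $nested_fs"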
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] print -- $'... (mount output identical to the capture at find_nested_mounts:91 above) ...'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] grep -w /sapmnt
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] mounted_fs_list=' /dev/sapmntlv /sapmnt jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] print -- ' /dev/sapmntlv /sapmnt jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] wc -l
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] fs_count=' 1'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:102] (( 1 > 1 ))
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] print -- $'... (mount output identical to the capture at find_nested_mounts:91 above) ...'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] grep -w /oracle/EPP/sapdata4
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] mounted_fs_list=' /dev/sapdata4lv /oracle/EPP/sapdata4 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] print -- ' /dev/sapdata4lv /oracle/EPP/sapdata4 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] wc -l
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] fs_count=' 1'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:102] (( 1 > 1 ))
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] print -- $'... (mount output identical to the capture at find_nested_mounts:91 above) ...'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] grep -w /oracle/EPP/sapdata3
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] mounted_fs_list=' /dev/sapdata3lv /oracle/EPP/sapdata3 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] print -- ' /dev/sapdata3lv /oracle/EPP/sapdata3 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] wc -l
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] fs_count=' 1'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:102] (( 1 > 1 ))
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] print -- $'... (mount output identical to the capture at find_nested_mounts:91 above) ...'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] grep -w /oracle/EPP/sapdata2
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] mounted_fs_list=' /dev/sapdata2lv /oracle/EPP/sapdata2 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] print -- ' /dev/sapdata2lv /oracle/EPP/sapdata2 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] wc -l
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] fs_count=' 1'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:102] (( 1 > 1 ))
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] print -- $'... (mount output identical to the capture at find_nested_mounts:91 above) ...'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] grep -w /oracle/EPP/sapdata1
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] mounted_fs_list=' /dev/sapdata1lv /oracle/EPP/sapdata1 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] print -- ' /dev/sapdata1lv /oracle/EPP/sapdata1 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] wc -l
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] fs_count=' 1'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:102] (( 1 > 1 ))
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] print -- $'... (mount output identical to the capture at find_nested_mounts:91 above) ...'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] grep -w /oracle/EPP/origlogB
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] mounted_fs_list=' /dev/origlogBlv /oracle/EPP/origlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] print -- ' /dev/origlogBlv /oracle/EPP/origlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] wc -l
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] fs_count=' 1'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:102] (( 1 > 1 ))
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] print -- $'... (mount output identical to the capture at find_nested_mounts:91 above) ...'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] grep -w /oracle/EPP/origlogA
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] mounted_fs_list=' /dev/origlogAlv /oracle/EPP/origlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] print -- ' /dev/origlogAlv      /oracle/EPP/origlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] wc -l
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] fs_count=' 1'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:102] (( 1 > 1 ))
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] print -- $'... (mount output identical to the capture at find_nested_mounts:91 above) ...'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] grep -w /oracle/EPP/oraarch
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] mounted_fs_list=' /dev/oraarchlv /oracle/EPP/oraarch jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] print -- ' /dev/oraarchlv /oracle/EPP/oraarch jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] wc -l
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] fs_count=' 1'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:102] (( 1 > 1 ))
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] print -- $'... (mount output identical to the capture at find_nested_mounts:91 above) ...'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] grep -w /oracle/EPP/mirrlogB
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] mounted_fs_list=' /dev/mirrlogBlv /oracle/EPP/mirrlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] print -- ' /dev/mirrlogBlv /oracle/EPP/mirrlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] wc -l
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] fs_count=' 1'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:102] (( 1 > 1 ))
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] print -- $'... (mount output identical to the capture at find_nested_mounts:91 above) ...'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] grep -w /oracle/EPP/mirrlogA
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] mounted_fs_list=' /dev/mirrlogAlv /oracle/EPP/mirrlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] print -- ' /dev/mirrlogAlv /oracle/EPP/mirrlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] wc -l
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] fs_count=' 1'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:102] (( 1 > 1 ))
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] print -- $'... (mount output identical to the capture at find_nested_mounts:91 above) ...'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] grep -w /oracle/EPP
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] mounted_fs_list=$' /dev/epplv /oracle/EPP jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogAlv /oracle/EPP/mirrlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogBlv /oracle/EPP/mirrlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/oraarchlv /oracle/EPP/oraarch jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogAlv /oracle/EPP/origlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogBlv /oracle/EPP/origlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata1lv /oracle/EPP/sapdata1 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata2lv /oracle/EPP/sapdata2 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata3lv /oracle/EPP/sapdata3 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata4lv /oracle/EPP/sapdata4 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] print -- $' /dev/epplv /oracle/EPP jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogAlv /oracle/EPP/mirrlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogBlv /oracle/EPP/mirrlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/oraarchlv /oracle/EPP/oraarch jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogAlv /oracle/EPP/origlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogBlv /oracle/EPP/origlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata1lv /oracle/EPP/sapdata1 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata2lv /oracle/EPP/sapdata2 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata3lv /oracle/EPP/sapdata3 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata4lv /oracle/EPP/sapdata4 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] wc -l
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] fs_count=' 10'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:102] (( 10 > 1 ))
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] echo $' /dev/epplv /oracle/EPP jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogAlv /oracle/EPP/mirrlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogBlv /oracle/EPP/mirrlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/oraarchlv /oracle/EPP/oraarch jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogAlv /oracle/EPP/origlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogBlv /oracle/EPP/origlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata1lv /oracle/EPP/sapdata1 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata2lv /oracle/EPP/sapdata2 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata3lv /oracle/EPP/sapdata3 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata4lv /oracle/EPP/sapdata4 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/epplv /oracle/EPP jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP == /oracle/EPP/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:128] [[ jfs2 == /oracle/EPP/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/mirrlogAlv /oracle/EPP/mirrlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/mirrlogA == /oracle/EPP/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/mirrlogAlv /oracle/EPP/mirrlogA
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/mirrlogA
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/mirrlogA ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/mirrlogA
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/mirrlogBlv /oracle/EPP/mirrlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/mirrlogB == /oracle/EPP/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/mirrlogBlv /oracle/EPP/mirrlogB
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/mirrlogB
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/mirrlogB ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/mirrlogB
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/oraarchlv /oracle/EPP/oraarch jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/oraarch == /oracle/EPP/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/oraarchlv /oracle/EPP/oraarch
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/oraarch
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/oraarch ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/oraarch
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/origlogAlv /oracle/EPP/origlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/origlogA == /oracle/EPP/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/origlogAlv /oracle/EPP/origlogA
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/origlogA
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/origlogA ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/origlogA
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/origlogBlv /oracle/EPP/origlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/origlogB == /oracle/EPP/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/origlogBlv /oracle/EPP/origlogB
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/origlogB
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/origlogB ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/origlogB
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/sapdata1lv /oracle/EPP/sapdata1 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/sapdata1 == /oracle/EPP/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/sapdata1lv /oracle/EPP/sapdata1
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/sapdata1
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/sapdata1 ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/sapdata1
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/sapdata2lv /oracle/EPP/sapdata2 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/sapdata2 == /oracle/EPP/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/sapdata2lv /oracle/EPP/sapdata2
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/sapdata2
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/sapdata2 ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/sapdata2
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/sapdata3lv /oracle/EPP/sapdata3 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/sapdata3 == /oracle/EPP/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/sapdata3lv /oracle/EPP/sapdata3
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/sapdata3
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/sapdata3 ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/sapdata3
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/sapdata4lv /oracle/EPP/sapdata4 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/sapdata4 == /oracle/EPP/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/sapdata4lv /oracle/EPP/sapdata4
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/sapdata4
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/sapdata4 ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/sapdata4
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
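Note that the outer passes overlap: every /oracle/EPP/* child recorded above is about to match again under /oracle (11 lines in the next grep), so discovered_fs accumulates duplicates across iterations. That is harmless provided the combined list is deduplicated before unmounting, presumably with the same sort -ru idiom used when the list was first built; the merge itself happens after this excerpt, so the following is an assumption:

    # Hedged sketch: merge the original and discovered lists, dedup, and
    # keep the child-before-parent order (assumed post-discovery step).
    print -- "$LIST_OF_FILE_SYSTEMS_FOR_RG $discovered_fs" |
        tr ' ' '\n' | grep -v '^$' | sort -ru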
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] print -- $'... (mount output identical to the capture at find_nested_mounts:91 above) ...'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] grep -w /oracle
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] mounted_fs_list=$' /dev/oraclelv /oracle jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/epplv /oracle/EPP jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogAlv /oracle/EPP/mirrlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogBlv /oracle/EPP/mirrlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/oraarchlv /oracle/EPP/oraarch jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogAlv /oracle/EPP/origlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogBlv /oracle/EPP/origlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata1lv /oracle/EPP/sapdata1 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata2lv /oracle/EPP/sapdata2 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata3lv /oracle/EPP/sapdata3 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata4lv /oracle/EPP/sapdata4 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] print -- $' /dev/oraclelv /oracle jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/epplv /oracle/EPP jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogAlv /oracle/EPP/mirrlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogBlv /oracle/EPP/mirrlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/oraarchlv /oracle/EPP/oraarch jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogAlv /oracle/EPP/origlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogBlv /oracle/EPP/origlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata1lv /oracle/EPP/sapdata1 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata2lv /oracle/EPP/sapdata2 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata3lv /oracle/EPP/sapdata3 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata4lv /oracle/EPP/sapdata4 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] wc -l
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] fs_count=' 11'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:102] (( 11 > 1 ))
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] echo $' /dev/oraclelv /oracle jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/epplv /oracle/EPP jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogAlv /oracle/EPP/mirrlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogBlv /oracle/EPP/mirrlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/oraarchlv /oracle/EPP/oraarch jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogAlv /oracle/EPP/origlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogBlv /oracle/EPP/origlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata1lv /oracle/EPP/sapdata1 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata2lv /oracle/EPP/sapdata2 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata3lv /oracle/EPP/sapdata3 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata4lv /oracle/EPP/sapdata4 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/oraclelv /oracle jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle == /oracle/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:128] [[ jfs2 == /oracle/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/epplv /oracle/EPP jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP == /oracle/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/epplv /oracle/EPP
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /oracle/EPP'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/mirrlogAlv /oracle/EPP/mirrlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/mirrlogA == /oracle/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/mirrlogAlv /oracle/EPP/mirrlogA
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/mirrlogA
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/mirrlogA ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/mirrlogA
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /oracle/EPP /oracle/EPP/mirrlogA'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/mirrlogBlv /oracle/EPP/mirrlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/mirrlogB == /oracle/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/mirrlogBlv /oracle/EPP/mirrlogB
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/mirrlogB
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/mirrlogB ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/mirrlogB
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/oraarchlv /oracle/EPP/oraarch jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/oraarch == /oracle/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/oraarchlv /oracle/EPP/oraarch
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/oraarch
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/oraarch ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/oraarch
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/origlogAlv /oracle/EPP/origlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/origlogA == /oracle/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/origlogAlv /oracle/EPP/origlogA
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/origlogA
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/origlogA ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/origlogA
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/origlogBlv /oracle/EPP/origlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/origlogB == /oracle/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/origlogBlv /oracle/EPP/origlogB
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/origlogB
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/origlogB ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/origlogB
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/sapdata1lv /oracle/EPP/sapdata1 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/sapdata1 == /oracle/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/sapdata1lv /oracle/EPP/sapdata1
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/sapdata1
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/sapdata1 ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/sapdata1
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/sapdata2lv /oracle/EPP/sapdata2 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/sapdata2 == /oracle/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/sapdata2lv /oracle/EPP/sapdata2
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/sapdata2
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/sapdata2 ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/sapdata2
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/sapdata3lv /oracle/EPP/sapdata3 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/sapdata3 == /oracle/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/sapdata3lv /oracle/EPP/sapdata3
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/sapdata3
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/sapdata3 ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/sapdata3
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
+epprd_rg:cl_deactivate_fs[find_nested_mounts:107] : The lines can be of one of two forms, depending on
+epprd_rg:cl_deactivate_fs[find_nested_mounts:108] : whether this is a local mount or an NFS mount
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] print '/dev/sapdata4lv /oracle/EPP/sapdata4 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:118] read first second third fourth rest
+epprd_rg:cl_deactivate_fs[find_nested_mounts:119] nested_fs=''
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ /oracle/EPP/sapdata4 == /oracle/* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:120] [[ jfs2 == jfs* ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:123] : The mount output is of the form
+epprd_rg:cl_deactivate_fs[find_nested_mounts:124] : lv_name lower_mount_point ...
+epprd_rg:cl_deactivate_fs[find_nested_mounts:125] : /dev/sapdata4lv /oracle/EPP/sapdata4
+epprd_rg:cl_deactivate_fs[find_nested_mounts:127] nested_fs=/oracle/EPP/sapdata4
+epprd_rg:cl_deactivate_fs[find_nested_mounts:138] [[ -n /oracle/EPP/sapdata4 ]]
+epprd_rg:cl_deactivate_fs[find_nested_mounts:141] : Record new nested file system /oracle/EPP/sapdata4
+epprd_rg:cl_deactivate_fs[find_nested_mounts:143] discovered_fs=' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:104] read line
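
Annotation: at this point discovered_fs holds duplicates (the /oracle/EPP/* entries were found both under /oracle/EPP and again under /oracle). That is harmless, because the sort -ru applied at :152 below removes duplicates (-u) while reverse-sorting (-r), for example:

    print -- '/oracle /oracle/EPP /oracle/EPP /oracle/EPP/mirrlogA' |
        tr ' ' '\n' | sort -ru
    # -> /oracle/EPP/mirrlogA
    #    /oracle/EPP
    #    /oracle
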
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] print -- $' node mounted mounted over vfs date options \n-------- --------------- --------------- ------ ------------ --------------- \n /dev/hd4 / jfs2 Nov 16 15:10 rw,log=/dev/hd8 \n /dev/hd2 /usr jfs2 Nov 16 15:10 rw,log=/dev/hd8 \n /dev/hd9var /var jfs2 Nov 16 15:10 rw,log=/dev/hd8 \n /dev/hd3 /tmp jfs2 Nov 16 15:10 rw,log=/dev/hd8 \n /dev/hd1 /home jfs2 Nov 16 15:11 rw,log=/dev/hd8 \n /dev/hd11admin /admin jfs2 Nov 16 15:11 rw,log=/dev/hd8 \n /proc /proc procfs Nov 16 15:11 rw \n /dev/hd10opt /opt jfs2 Nov 16 15:11 rw,log=/dev/hd8 \n /dev/livedump /var/adm/ras/livedump jfs2 Nov 16 15:11 rw,log=/dev/hd8 \n /dev/ptflv /ptf jfs2 Nov 16 15:11 rw,log=/dev/hd8 \n /ahafs /aha ahafs Nov 16 15:11 rw \n /dev/boardlv /board_org jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/oraclelv /oracle jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/epplv /oracle/EPP jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogAlv /oracle/EPP/mirrlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/mirrlogBlv /oracle/EPP/mirrlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/oraarchlv /oracle/EPP/oraarch jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogAlv /oracle/EPP/origlogA jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/origlogBlv /oracle/EPP/origlogB jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata1lv /oracle/EPP/sapdata1 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata2lv /oracle/EPP/sapdata2 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata3lv /oracle/EPP/sapdata3 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapdata4lv /oracle/EPP/sapdata4 jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/sapmntlv /sapmnt jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\n /dev/saplv /usr/sap jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv\nepdev /sapcd /sapcd nfs3 Jan 28 17:37 bg,soft,intr,sec=sys,rw\nepdev /usr/sap/trans /usr/sap/trans nfs3 Jan 28 17:37 bg,soft,intr,sec=sys,rw'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] grep -w /board_org
+epprd_rg:cl_deactivate_fs[find_nested_mounts:100] mounted_fs_list=' /dev/boardlv /board_org jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] print -- ' /dev/boardlv /board_org jfs2 Jan 28 17:10 rw,log=/dev/epprdaloglv'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] wc -l
+epprd_rg:cl_deactivate_fs[find_nested_mounts:101] fs_count=' 1'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:102] (( 1 > 1 ))
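
Annotation: for /board_org the grep matched only the filesystem's own entry, so the guard at :102 skips the per-line scan; a mount point whose name matched nothing else cannot have nested mounts. A sketch of the guard, with variable names taken from the trace:

    mounted_fs_list=$(mount | grep -w "$FS")
    fs_count=$(print -- "$mounted_fs_list" | wc -l)
    if (( fs_count > 1 )); then
        : scan each matched line for mounts nested under $FS
    fi
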
+epprd_rg:cl_deactivate_fs[find_nested_mounts:150] : Pass comprehensive list to stdout, sorted to get correct unmount order
+epprd_rg:cl_deactivate_fs[find_nested_mounts:152] print -- $'/usr/sap\n/sapmnt\n/oracle/EPP/sapdata4\n/oracle/EPP/sapdata3\n/oracle/EPP/sapdata2\n/oracle/EPP/sapdata1\n/oracle/EPP/origlogB\n/oracle/EPP/origlogA\n/oracle/EPP/oraarch\n/oracle/EPP/mirrlogB\n/oracle/EPP/mirrlogA\n/oracle/EPP\n/oracle\n/board_org' ' /usr/sap/trans /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:152] tr ' ' '\n'
+epprd_rg:cl_deactivate_fs[find_nested_mounts:152] sort -ru
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:733] LIST_OF_FILE_SYSTEMS_FOR_RG=$'/usr/sap/trans\n/usr/sap\n/sapmnt\n/oracle/EPP/sapdata4\n/oracle/EPP/sapdata3\n/oracle/EPP/sapdata2\n/oracle/EPP/sapdata1\n/oracle/EPP/origlogB\n/oracle/EPP/origlogA\n/oracle/EPP/oraarch\n/oracle/EPP/mirrlogB\n/oracle/EPP/mirrlogA\n/oracle/EPP\n/oracle\n/board_org'
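
Annotation: the comprehensive list is reverse-sorted so that child mount points precede their parents; /oracle/EPP/sapdata4 must be unmounted before /oracle/EPP, and /oracle/EPP before /oracle. A sketch of :152, where fs_list stands for the argument list passed in plus the discovered set:

    print -- "$fs_list" | tr ' ' '\n' | sort -ru   # deepest paths first, deduped
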
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:736] : Get the recovery method used for all filesystems in this resource group
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:738] print sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:738] read RECOVERY_METHOD RECOVERY_METHODS
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:738] IFS=:
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:739] print sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:739] cut -f 1 -d ,
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:739] RECOVERY_METHOD=sequential
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:742] : verify the recovery method
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:744] RECOVERY_METHOD=sequential
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:745] RECOVERY_METHOD=sequential
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:747] [[ sequential != sequential ]]
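
Annotation: the recovery methods arrive as one comma-separated group per resource group, with the groups separated by colons; the code keeps only the first value. A sketch of :738-:739 (the sample input is illustrative; this trace has a single all-sequential group):

    methods='sequential,sequential:parallel,parallel'
    print -- "$methods" | IFS=: read RECOVERY_METHOD RECOVERY_METHODS
    RECOVERY_METHOD=$(print -- "$RECOVERY_METHOD" | cut -f 1 -d ,)
    print -- "$RECOVERY_METHOD"   # -> sequential
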
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:754] : Tell the cluster manager what we are going to do
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:756] ALLFS=All_filesystems
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:757] cl_RMupdate resource_releasing All_filesystems cl_deactivate_fs
2023-01-28T18:00:11.872319
2023-01-28T18:00:11.876681
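
Annotation: the two bare timestamps above are the output of cl_RMupdate, the PowerHA utility that reports the resource-state transition to the cluster manager before any umount is attempted. The call as traced at :756-:757:

    ALLFS=All_filesystems
    cl_RMupdate resource_releasing $ALLFS cl_deactivate_fs
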
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:760] : now that all variables are set, perform the umounts
+epprd_rg:cl_deactivate_fs:/usr/sap/trans[deactivate_fs_process_resources:764] PS4_LOOP=/usr/sap/trans
+epprd_rg:cl_deactivate_fs:/usr/sap/trans[deactivate_fs_process_resources:765] [[ sequential == parallel ]]
+epprd_rg:cl_deactivate_fs:/usr/sap/trans[deactivate_fs_process_resources:770] fs_umount /usr/sap/trans cl_deactivate_fs epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(0.282)[fs_umount:313] FS=/usr/sap/trans
+epprd_rg:cl_deactivate_fs(0.282)[fs_umount:313] typeset FS
+epprd_rg:cl_deactivate_fs(0.282)[fs_umount:314] PROGNAME=cl_deactivate_fs
+epprd_rg:cl_deactivate_fs(0.282)[fs_umount:314] typeset PROGNAME
+epprd_rg:cl_deactivate_fs(0.282)[fs_umount:315] TMP_FILENAME=epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(0.282)[fs_umount:315] typeset TMP_FILENAME
+epprd_rg:cl_deactivate_fs(0.282)[fs_umount:316] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
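
Annotation: clwparroot returns the WPAR root path for the resource group; here loadWparName finds no WPAR_NAME resource in the ODM, so the script exits 0 with empty output and the unmounts run in the global environment. The essential check, paraphrased from the trace (clodmget is the PowerHA ODM query tool seen above):

    wparName=$(clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource)
    if [[ -z $wparName ]]; then
        exit 0    # no WPAR for this RG; the caller sees WPAR_ROOT=''
    fi
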
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:316] WPAR_ROOT=''
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:316] typeset WPAR_ROOT
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:317] STATUS=0
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:317] typeset -li STATUS
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:318] typeset lv
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:319] typeset fs_type
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:320] typeset count
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:321] typeset line
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:322] RC=0
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:322] typeset -li RC
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:323] typeset pid
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:324] typeset pidlist
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:325] typeset lv_lsfs
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:326] disable_procfile_debug=false
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:326] typeset disable_procfile_debug
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:327] typeset crossmount_rg
+epprd_rg:cl_deactivate_fs(0.302)[fs_umount:330] : Fetch filesystem type and unmount nfs filesystem
+epprd_rg:cl_deactivate_fs(0.304)[fs_umount:332] mount
+epprd_rg:cl_deactivate_fs(0.306)[fs_umount:332] awk '$3==FILESYS && $4~"^nfs."{print $4}' FILESYS=/usr/sap/trans
+epprd_rg:cl_deactivate_fs(0.310)[fs_umount:332] fs_type=nfs3
+epprd_rg:cl_deactivate_fs(0.310)[fs_umount:333] [[ nfs3 == nfs* ]]
+epprd_rg:cl_deactivate_fs(0.310)[fs_umount:336] : unmount nfs filesystem
+epprd_rg:cl_deactivate_fs(0.310)[fs_umount:338] umount /usr/sap/trans
+epprd_rg:cl_deactivate_fs(0.316)[fs_umount:358] : append status to the status file
+epprd_rg:cl_deactivate_fs(0.316)[fs_umount:360] print -- 0 /usr/sap/trans
+epprd_rg:cl_deactivate_fs(0.316)[fs_umount:360] 1>> /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(0.316)[fs_umount:361] return 0
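
Annotation: /usr/sap/trans takes the NFS fast path: fs_umount derives the type from the mount listing (field 3 is the mount point, field 4 the vfs on NFS lines), unmounts without retries, and appends a "0 <filesystem>" status record. A sketch under those assumptions, with the wrapper name invented:

    nfs_umount() {    # hypothetical wrapper around fs_umount:332-:360
        typeset FS=$1 TMP=$2
        typeset fs_type
        fs_type=$(mount | awk '$3==FILESYS && $4~"^nfs."{print $4}' FILESYS=$FS)
        [[ $fs_type == nfs* ]] || return 1
        umount $FS
        print -- $? $FS >> /tmp/$TMP    # e.g. "0 /usr/sap/trans"
    }
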
+epprd_rg:cl_deactivate_fs:/usr/sap[deactivate_fs_process_resources:764] PS4_LOOP=/usr/sap
+epprd_rg:cl_deactivate_fs:/usr/sap[deactivate_fs_process_resources:765] [[ sequential == parallel ]]
+epprd_rg:cl_deactivate_fs:/usr/sap[deactivate_fs_process_resources:770] fs_umount /usr/sap cl_deactivate_fs epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(0.317)[fs_umount:313] FS=/usr/sap
+epprd_rg:cl_deactivate_fs(0.317)[fs_umount:313] typeset FS
+epprd_rg:cl_deactivate_fs(0.317)[fs_umount:314] PROGNAME=cl_deactivate_fs
+epprd_rg:cl_deactivate_fs(0.317)[fs_umount:314] typeset PROGNAME
+epprd_rg:cl_deactivate_fs(0.317)[fs_umount:315] TMP_FILENAME=epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(0.317)[fs_umount:315] typeset TMP_FILENAME
+epprd_rg:cl_deactivate_fs(0.317)[fs_umount:316] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:316] WPAR_ROOT=''
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:316] typeset WPAR_ROOT
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:317] STATUS=0
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:317] typeset -li STATUS
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:318] typeset lv
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:319] typeset fs_type
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:320] typeset count
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:321] typeset line
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:322] RC=0
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:322] typeset -li RC
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:323] typeset pid
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:324] typeset pidlist
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:325] typeset lv_lsfs
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:326] disable_procfile_debug=false
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:326] typeset disable_procfile_debug
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:327] typeset crossmount_rg
+epprd_rg:cl_deactivate_fs(0.337)[fs_umount:330] : Fetch filesystem type and unmount nfs filesystem
+epprd_rg:cl_deactivate_fs(0.338)[fs_umount:332] mount
+epprd_rg:cl_deactivate_fs(0.341)[fs_umount:332] awk '$3==FILESYS && $4~"^nfs."{print $4}' FILESYS=/usr/sap
+epprd_rg:cl_deactivate_fs(0.345)[fs_umount:332] fs_type=''
+epprd_rg:cl_deactivate_fs(0.345)[fs_umount:333] [[ '' == nfs* ]]
+epprd_rg:cl_deactivate_fs(0.345)[fs_umount:365] : Get the logical volume associated with the filesystem
+epprd_rg:cl_deactivate_fs(0.345)[fs_umount:367] lsfs -c /usr/sap
+epprd_rg:cl_deactivate_fs(0.348)[fs_umount:367] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/usr/sap:/dev/saplv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_deactivate_fs(0.348)[fs_umount:382] : Get the logical volume name and filesystem type
+epprd_rg:cl_deactivate_fs(0.349)[fs_umount:384] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/usr/sap:/dev/saplv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_deactivate_fs(0.351)[fs_umount:384] tail -1
+epprd_rg:cl_deactivate_fs(0.353)[fs_umount:384] read skip lv fs_type rest
+epprd_rg:cl_deactivate_fs(0.353)[fs_umount:384] IFS=:
+epprd_rg:cl_deactivate_fs(0.353)[fs_umount:387] : For WPARs, find the real file system name
+epprd_rg:cl_deactivate_fs(0.353)[fs_umount:389] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs(0.353)[fs_umount:392] : Check to see if filesystem is mounted.
+epprd_rg:cl_deactivate_fs(0.354)[fs_umount:394] mount
+epprd_rg:cl_deactivate_fs(0.354)[fs_umount:394] LC_ALL=C
+epprd_rg:cl_deactivate_fs(0.356)[fs_umount:394] awk '{ if ( $1 == "/dev/saplv" ) print $2 }'
+epprd_rg:cl_deactivate_fs(0.361)[fs_umount:394] FS_MOUNTED=/usr/sap
+epprd_rg:cl_deactivate_fs(0.361)[fs_umount:395] [[ -n /usr/sap ]]
+epprd_rg:cl_deactivate_fs(0.361)[fs_umount:397] [[ /usr/sap != /usr/sap ]]
+epprd_rg:cl_deactivate_fs(0.361)[fs_umount:409] [[ /usr/sap == / ]]
+epprd_rg:cl_deactivate_fs(0.361)[fs_umount:409] [[ /usr/sap == /usr ]]
+epprd_rg:cl_deactivate_fs(0.361)[fs_umount:409] [[ /usr/sap == /dev ]]
+epprd_rg:cl_deactivate_fs(0.361)[fs_umount:409] [[ /usr/sap == /proc ]]
+epprd_rg:cl_deactivate_fs(0.361)[fs_umount:409] [[ /usr/sap == /var ]]
+epprd_rg:cl_deactivate_fs(0.361)[fs_umount:425] amlog_trace '' 'Deactivating Filesystem|/usr/sap'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:11.985683
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:11.985683|INFO: Deactivating Filesystem|/usr/sap'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(0.391)[fs_umount:427] : Try up to 60 times to unmount /usr/sap
+epprd_rg:cl_deactivate_fs(0.391)[fs_umount:429] (( count=1))
+epprd_rg:cl_deactivate_fs(0.391)[fs_umount:429] (( count <= 60))
+epprd_rg:cl_deactivate_fs(0.391)[fs_umount:432] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_deactivate_fs(0.393)[fs_umount:432] : Attempt 1 of 60 to unmount at Jan 28 18:00:11.000
+epprd_rg:cl_deactivate_fs(0.393)[fs_umount:434] umount /usr/sap
+epprd_rg:cl_deactivate_fs(1.058)[fs_umount:437] : Unmount of /usr/sap worked. Can stop now.
+epprd_rg:cl_deactivate_fs(1.058)[fs_umount:439] break
+epprd_rg:cl_deactivate_fs(1.058)[fs_umount:672] [[ 0 != 0 ]]
+epprd_rg:cl_deactivate_fs(1.058)[fs_umount:676] amlog_trace '' 'Deactivating Filesystem|/usr/sap'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:12.682468
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:12.682468|INFO: Deactivating Filesystem|/usr/sap'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(1.087)[fs_umount:685] : append status to the status file
+epprd_rg:cl_deactivate_fs(1.087)[fs_umount:687] print -- 0 /dev/saplv /usr/sap
+epprd_rg:cl_deactivate_fs(1.087)[fs_umount:687] 1>> /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(1.087)[fs_umount:691] return 0
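
Annotation: local JFS2 filesystems go through a retry loop of up to 60 timed attempts, breaking out on the first successful umount (here attempt 1 of 60 succeeded after roughly 0.7s). The loop shape, reconstructed from :427-:439; the interval between failed attempts is not visible in this first-try trace, so the sleep is an assumption:

    typeset -i count
    for (( count=1; count <= 60; count++ )); do
        : Attempt $count of 60 to unmount at $(date '+%h %d %H:%M:%S.000')
        umount $FS && break
        sleep 2    # assumed back-off; the real interval is not shown here
    done
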
+epprd_rg:cl_deactivate_fs:/sapmnt[deactivate_fs_process_resources:764] PS4_LOOP=/sapmnt
+epprd_rg:cl_deactivate_fs:/sapmnt[deactivate_fs_process_resources:765] [[ sequential == parallel ]]
+epprd_rg:cl_deactivate_fs:/sapmnt[deactivate_fs_process_resources:770] fs_umount /sapmnt cl_deactivate_fs epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(1.087)[fs_umount:313] FS=/sapmnt
+epprd_rg:cl_deactivate_fs(1.088)[fs_umount:313] typeset FS
+epprd_rg:cl_deactivate_fs(1.088)[fs_umount:314] PROGNAME=cl_deactivate_fs
+epprd_rg:cl_deactivate_fs(1.088)[fs_umount:314] typeset PROGNAME
+epprd_rg:cl_deactivate_fs(1.088)[fs_umount:315] TMP_FILENAME=epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(1.088)[fs_umount:315] typeset TMP_FILENAME
+epprd_rg:cl_deactivate_fs(1.088)[fs_umount:316] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:316] WPAR_ROOT=''
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:316] typeset WPAR_ROOT
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:317] STATUS=0
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:317] typeset -li STATUS
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:318] typeset lv
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:319] typeset fs_type
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:320] typeset count
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:321] typeset line
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:322] RC=0
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:322] typeset -li RC
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:323] typeset pid
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:324] typeset pidlist
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:325] typeset lv_lsfs
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:326] disable_procfile_debug=false
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:326] typeset disable_procfile_debug
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:327] typeset crossmount_rg
+epprd_rg:cl_deactivate_fs(1.108)[fs_umount:330] : Fetch filesystem type and unmount nfs filesystem
+epprd_rg:cl_deactivate_fs(1.110)[fs_umount:332] awk '$3==FILESYS && $4~"^nfs."{print $4}' FILESYS=/sapmnt
+epprd_rg:cl_deactivate_fs(1.110)[fs_umount:332] mount
+epprd_rg:cl_deactivate_fs(1.114)[fs_umount:332] fs_type=''
+epprd_rg:cl_deactivate_fs(1.114)[fs_umount:333] [[ '' == nfs* ]]
+epprd_rg:cl_deactivate_fs(1.114)[fs_umount:365] : Get the logical volume associated with the filesystem
+epprd_rg:cl_deactivate_fs(1.114)[fs_umount:367] lsfs -c /sapmnt
+epprd_rg:cl_deactivate_fs(1.118)[fs_umount:367] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/sapmnt:/dev/sapmntlv:jfs2:::20971520:rw:no:no'
+epprd_rg:cl_deactivate_fs(1.118)[fs_umount:382] : Get the logical volume name and filesystem type
+epprd_rg:cl_deactivate_fs(1.119)[fs_umount:384] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/sapmnt:/dev/sapmntlv:jfs2:::20971520:rw:no:no'
+epprd_rg:cl_deactivate_fs(1.120)[fs_umount:384] tail -1
+epprd_rg:cl_deactivate_fs(1.119)[fs_umount:384] read skip lv fs_type rest
+epprd_rg:cl_deactivate_fs(1.120)[fs_umount:384] IFS=:
+epprd_rg:cl_deactivate_fs(1.122)[fs_umount:387] : For WPARs, find the real file system name
+epprd_rg:cl_deactivate_fs(1.122)[fs_umount:389] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs(1.122)[fs_umount:392] : Check to see if filesystem is mounted.
+epprd_rg:cl_deactivate_fs(1.123)[fs_umount:394] awk '{ if ( $1 == "/dev/sapmntlv" ) print $2 }'
+epprd_rg:cl_deactivate_fs(1.123)[fs_umount:394] mount
+epprd_rg:cl_deactivate_fs(1.124)[fs_umount:394] LC_ALL=C
+epprd_rg:cl_deactivate_fs(1.128)[fs_umount:394] FS_MOUNTED=/sapmnt
+epprd_rg:cl_deactivate_fs(1.128)[fs_umount:395] [[ -n /sapmnt ]]
+epprd_rg:cl_deactivate_fs(1.128)[fs_umount:397] [[ /sapmnt != /sapmnt ]]
+epprd_rg:cl_deactivate_fs(1.128)[fs_umount:409] [[ /sapmnt == / ]]
+epprd_rg:cl_deactivate_fs(1.128)[fs_umount:409] [[ /sapmnt == /usr ]]
+epprd_rg:cl_deactivate_fs(1.128)[fs_umount:409] [[ /sapmnt == /dev ]]
+epprd_rg:cl_deactivate_fs(1.128)[fs_umount:409] [[ /sapmnt == /proc ]]
+epprd_rg:cl_deactivate_fs(1.128)[fs_umount:409] [[ /sapmnt == /var ]]
+epprd_rg:cl_deactivate_fs(1.128)[fs_umount:425] amlog_trace '' 'Deactivating Filesystem|/sapmnt'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:12.751723
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:12.751723|INFO: Deactivating Filesystem|/sapmnt'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(1.157)[fs_umount:427] : Try up to 60 times to unmount /sapmnt
+epprd_rg:cl_deactivate_fs(1.157)[fs_umount:429] (( count=1))
+epprd_rg:cl_deactivate_fs(1.157)[fs_umount:429] (( count <= 60))
+epprd_rg:cl_deactivate_fs(1.157)[fs_umount:432] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_deactivate_fs(1.159)[fs_umount:432] : Attempt 1 of 60 to unmount at Jan 28 18:00:12.000
+epprd_rg:cl_deactivate_fs(1.159)[fs_umount:434] umount /sapmnt
+epprd_rg:cl_deactivate_fs(1.381)[fs_umount:437] : Unmount of /sapmnt worked. Can stop now.
+epprd_rg:cl_deactivate_fs(1.381)[fs_umount:439] break
+epprd_rg:cl_deactivate_fs(1.381)[fs_umount:672] [[ 0 != 0 ]]
+epprd_rg:cl_deactivate_fs(1.381)[fs_umount:676] amlog_trace '' 'Deactivating Filesystem|/sapmnt'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:13.006025
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:13.006025|INFO: Deactivating Filesystem|/sapmnt'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(1.411)[fs_umount:685] : append status to the status file
+epprd_rg:cl_deactivate_fs(1.411)[fs_umount:687] print -- 0 /dev/sapmntlv /sapmnt
+epprd_rg:cl_deactivate_fs(1.411)[fs_umount:687] 1>> /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(1.411)[fs_umount:691] return 0
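
Annotation: for local filesystems the owning logical volume comes from lsfs -c, which prints a '#MountPoint:Device:Vfs:...' header followed by one colon-separated record; tail -1 keeps the record and a colon-IFS read splits it. Sketch of :367-:384:

    lv_lsfs=$(lsfs -c $FS)
    print -- "$lv_lsfs" | tail -1 | IFS=: read skip lv fs_type rest
    print -- "$lv" "$fs_type"    # e.g. /dev/sapmntlv jfs2
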
+epprd_rg:cl_deactivate_fs:/oracle/EPP/sapdata4[deactivate_fs_process_resources:764] PS4_LOOP=/oracle/EPP/sapdata4
+epprd_rg:cl_deactivate_fs:/oracle/EPP/sapdata4[deactivate_fs_process_resources:765] [[ sequential == parallel ]]
+epprd_rg:cl_deactivate_fs:/oracle/EPP/sapdata4[deactivate_fs_process_resources:770] fs_umount /oracle/EPP/sapdata4 cl_deactivate_fs epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(1.411)[fs_umount:313] FS=/oracle/EPP/sapdata4
+epprd_rg:cl_deactivate_fs(1.411)[fs_umount:313] typeset FS
+epprd_rg:cl_deactivate_fs(1.411)[fs_umount:314] PROGNAME=cl_deactivate_fs
+epprd_rg:cl_deactivate_fs(1.411)[fs_umount:314] typeset PROGNAME
+epprd_rg:cl_deactivate_fs(1.411)[fs_umount:315] TMP_FILENAME=epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(1.411)[fs_umount:315] typeset TMP_FILENAME
+epprd_rg:cl_deactivate_fs(1.411)[fs_umount:316] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:316] WPAR_ROOT=''
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:316] typeset WPAR_ROOT
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:317] STATUS=0
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:317] typeset -li STATUS
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:318] typeset lv
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:319] typeset fs_type
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:320] typeset count
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:321] typeset line
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:322] RC=0
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:322] typeset -li RC
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:323] typeset pid
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:324] typeset pidlist
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:325] typeset lv_lsfs
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:326] disable_procfile_debug=false
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:326] typeset disable_procfile_debug
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:327] typeset crossmount_rg
+epprd_rg:cl_deactivate_fs(1.432)[fs_umount:330] : Fetch filesystem type and unmount nfs filesystem
+epprd_rg:cl_deactivate_fs(1.434)[fs_umount:332] awk '$3==FILESYS && $4~"^nfs."{print $4}' FILESYS=/oracle/EPP/sapdata4
+epprd_rg:cl_deactivate_fs(1.434)[fs_umount:332] mount
+epprd_rg:cl_deactivate_fs(1.438)[fs_umount:332] fs_type=''
+epprd_rg:cl_deactivate_fs(1.438)[fs_umount:333] [[ '' == nfs* ]]
+epprd_rg:cl_deactivate_fs(1.438)[fs_umount:365] : Get the logical volume associated with the filesystem
+epprd_rg:cl_deactivate_fs(1.438)[fs_umount:367] lsfs -c /oracle/EPP/sapdata4
+epprd_rg:cl_deactivate_fs(1.441)[fs_umount:367] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata4:/dev/sapdata4lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_deactivate_fs(1.441)[fs_umount:382] : Get the logical volume name and filesystem type
+epprd_rg:cl_deactivate_fs(1.442)[fs_umount:384] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata4:/dev/sapdata4lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_deactivate_fs(1.444)[fs_umount:384] tail -1
+epprd_rg:cl_deactivate_fs(1.443)[fs_umount:384] read skip lv fs_type rest
+epprd_rg:cl_deactivate_fs(1.444)[fs_umount:384] IFS=:
+epprd_rg:cl_deactivate_fs(1.445)[fs_umount:387] : For WPARs, find the real file system name
+epprd_rg:cl_deactivate_fs(1.445)[fs_umount:389] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs(1.446)[fs_umount:392] : Check to see if filesystem is mounted.
+epprd_rg:cl_deactivate_fs(1.447)[fs_umount:394] awk '{ if ( $1 == "/dev/sapdata4lv" ) print $2 }'
+epprd_rg:cl_deactivate_fs(1.447)[fs_umount:394] mount
+epprd_rg:cl_deactivate_fs(1.447)[fs_umount:394] LC_ALL=C
+epprd_rg:cl_deactivate_fs(1.452)[fs_umount:394] FS_MOUNTED=/oracle/EPP/sapdata4
+epprd_rg:cl_deactivate_fs(1.452)[fs_umount:395] [[ -n /oracle/EPP/sapdata4 ]]
+epprd_rg:cl_deactivate_fs(1.452)[fs_umount:397] [[ /oracle/EPP/sapdata4 != /oracle/EPP/sapdata4 ]]
+epprd_rg:cl_deactivate_fs(1.452)[fs_umount:409] [[ /oracle/EPP/sapdata4 == / ]]
+epprd_rg:cl_deactivate_fs(1.452)[fs_umount:409] [[ /oracle/EPP/sapdata4 == /usr ]]
+epprd_rg:cl_deactivate_fs(1.452)[fs_umount:409] [[ /oracle/EPP/sapdata4 == /dev ]]
+epprd_rg:cl_deactivate_fs(1.452)[fs_umount:409] [[ /oracle/EPP/sapdata4 == /proc ]]
+epprd_rg:cl_deactivate_fs(1.452)[fs_umount:409] [[ /oracle/EPP/sapdata4 == /var ]]
+epprd_rg:cl_deactivate_fs(1.452)[fs_umount:425] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/sapdata4'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:13.075687
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:13.075687|INFO: Deactivating Filesystem|/oracle/EPP/sapdata4'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(1.481)[fs_umount:427] : Try up to 60 times to unmount /oracle/EPP/sapdata4
+epprd_rg:cl_deactivate_fs(1.481)[fs_umount:429] (( count=1))
+epprd_rg:cl_deactivate_fs(1.481)[fs_umount:429] (( count <= 60))
+epprd_rg:cl_deactivate_fs(1.481)[fs_umount:432] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_deactivate_fs(1.483)[fs_umount:432] : Attempt 1 of 60 to unmount at Jan 28 18:00:13.000
+epprd_rg:cl_deactivate_fs(1.483)[fs_umount:434] umount /oracle/EPP/sapdata4
+epprd_rg:cl_deactivate_fs(1.555)[fs_umount:437] : Unmount of /oracle/EPP/sapdata4 worked. Can stop now.
+epprd_rg:cl_deactivate_fs(1.555)[fs_umount:439] break
+epprd_rg:cl_deactivate_fs(1.555)[fs_umount:672] [[ 0 != 0 ]]
+epprd_rg:cl_deactivate_fs(1.555)[fs_umount:676] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/sapdata4'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:13.179956
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:13.179956|INFO: Deactivating Filesystem|/oracle/EPP/sapdata4'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(1.585)[fs_umount:685] : append status to the status file
+epprd_rg:cl_deactivate_fs(1.585)[fs_umount:687] print -- 0 /dev/sapdata4lv /oracle/EPP/sapdata4
+epprd_rg:cl_deactivate_fs(1.585)[fs_umount:687] 1>> /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(1.585)[fs_umount:691] return 0
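
Annotation: each successful unmount is bracketed by two amlog_trace records in /var/hacmp/availability/clavailability.log and ends by appending "0 <lv> <filesystem>" to the per-RG status file, presumably read back by the caller to judge overall success. The record shapes, copied from the trace (the date format below only approximates cltime's microsecond output):

    echo "|$(date +%Y-%m-%dT%H:%M:%S)|INFO: Deactivating Filesystem|$FS" \
        >> /var/hacmp/availability/clavailability.log
    print -- 0 $lv $FS >> /tmp/epprd_rg_deactivate_fs.tmp
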
+epprd_rg:cl_deactivate_fs:/oracle/EPP/sapdata3[deactivate_fs_process_resources:764] PS4_LOOP=/oracle/EPP/sapdata3
+epprd_rg:cl_deactivate_fs:/oracle/EPP/sapdata3[deactivate_fs_process_resources:765] [[ sequential == parallel ]]
+epprd_rg:cl_deactivate_fs:/oracle/EPP/sapdata3[deactivate_fs_process_resources:770] fs_umount /oracle/EPP/sapdata3 cl_deactivate_fs epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(1.585)[fs_umount:313] FS=/oracle/EPP/sapdata3
+epprd_rg:cl_deactivate_fs(1.585)[fs_umount:313] typeset FS
+epprd_rg:cl_deactivate_fs(1.585)[fs_umount:314] PROGNAME=cl_deactivate_fs
+epprd_rg:cl_deactivate_fs(1.585)[fs_umount:314] typeset PROGNAME
+epprd_rg:cl_deactivate_fs(1.585)[fs_umount:315] TMP_FILENAME=epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(1.585)[fs_umount:315] typeset TMP_FILENAME
+epprd_rg:cl_deactivate_fs(1.585)[fs_umount:316] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:316] WPAR_ROOT=''
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:316] typeset WPAR_ROOT
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:317] STATUS=0
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:317] typeset -li STATUS
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:318] typeset lv
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:319] typeset fs_type
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:320] typeset count
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:321] typeset line
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:322] RC=0
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:322] typeset -li RC
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:323] typeset pid
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:324] typeset pidlist
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:325] typeset lv_lsfs
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:326] disable_procfile_debug=false
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:326] typeset disable_procfile_debug
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:327] typeset crossmount_rg
+epprd_rg:cl_deactivate_fs(1.606)[fs_umount:330] : Fetch filesystem type and unmount nfs filesystem
+epprd_rg:cl_deactivate_fs(1.608)[fs_umount:332] awk '$3==FILESYS && $4~"^nfs."{print $4}' FILESYS=/oracle/EPP/sapdata3
+epprd_rg:cl_deactivate_fs(1.608)[fs_umount:332] mount
+epprd_rg:cl_deactivate_fs(1.612)[fs_umount:332] fs_type=''
+epprd_rg:cl_deactivate_fs(1.612)[fs_umount:333] [[ '' == nfs* ]]
+epprd_rg:cl_deactivate_fs(1.612)[fs_umount:365] : Get the logical volume associated with the filesystem
+epprd_rg:cl_deactivate_fs(1.612)[fs_umount:367] lsfs -c /oracle/EPP/sapdata3
+epprd_rg:cl_deactivate_fs(1.615)[fs_umount:367] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata3:/dev/sapdata3lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_deactivate_fs(1.615)[fs_umount:382] : Get the logical volume name and filesystem type
+epprd_rg:cl_deactivate_fs(1.616)[fs_umount:384] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata3:/dev/sapdata3lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_deactivate_fs(1.617)[fs_umount:384] tail -1
+epprd_rg:cl_deactivate_fs(1.618)[fs_umount:384] read skip lv fs_type rest
+epprd_rg:cl_deactivate_fs(1.618)[fs_umount:384] IFS=:
+epprd_rg:cl_deactivate_fs(1.619)[fs_umount:387] : For WPARs, find the real file system name
+epprd_rg:cl_deactivate_fs(1.619)[fs_umount:389] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs(1.619)[fs_umount:392] : Check to see if filesystem is mounted.
+epprd_rg:cl_deactivate_fs(1.621)[fs_umount:394] awk '{ if ( $1 == "/dev/sapdata3lv" ) print $2 }'
+epprd_rg:cl_deactivate_fs(1.621)[fs_umount:394] mount
+epprd_rg:cl_deactivate_fs(1.621)[fs_umount:394] LC_ALL=C
+epprd_rg:cl_deactivate_fs(1.625)[fs_umount:394] FS_MOUNTED=/oracle/EPP/sapdata3
+epprd_rg:cl_deactivate_fs(1.625)[fs_umount:395] [[ -n /oracle/EPP/sapdata3 ]]
+epprd_rg:cl_deactivate_fs(1.625)[fs_umount:397] [[ /oracle/EPP/sapdata3 != /oracle/EPP/sapdata3 ]]
+epprd_rg:cl_deactivate_fs(1.625)[fs_umount:409] [[ /oracle/EPP/sapdata3 == / ]]
+epprd_rg:cl_deactivate_fs(1.625)[fs_umount:409] [[ /oracle/EPP/sapdata3 == /usr ]]
+epprd_rg:cl_deactivate_fs(1.625)[fs_umount:409] [[ /oracle/EPP/sapdata3 == /dev ]]
+epprd_rg:cl_deactivate_fs(1.625)[fs_umount:409] [[ /oracle/EPP/sapdata3 == /proc ]]
+epprd_rg:cl_deactivate_fs(1.625)[fs_umount:409] [[ /oracle/EPP/sapdata3 == /var ]]
+epprd_rg:cl_deactivate_fs(1.626)[fs_umount:425] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/sapdata3'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:13.249333
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:13.249333|INFO: Deactivating Filesystem|/oracle/EPP/sapdata3'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(1.654)[fs_umount:427] : Try up to 60 times to unmount /oracle/EPP/sapdata3
+epprd_rg:cl_deactivate_fs(1.654)[fs_umount:429] (( count=1))
+epprd_rg:cl_deactivate_fs(1.654)[fs_umount:429] (( count <= 60))
+epprd_rg:cl_deactivate_fs(1.654)[fs_umount:432] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_deactivate_fs(1.657)[fs_umount:432] : Attempt 1 of 60 to unmount at Jan 28 18:00:13.000
+epprd_rg:cl_deactivate_fs(1.657)[fs_umount:434] umount /oracle/EPP/sapdata3
+epprd_rg:cl_deactivate_fs(1.729)[fs_umount:437] : Unmount of /oracle/EPP/sapdata3 worked. Can stop now.
+epprd_rg:cl_deactivate_fs(1.729)[fs_umount:439] break
+epprd_rg:cl_deactivate_fs(1.730)[fs_umount:672] [[ 0 != 0 ]]
+epprd_rg:cl_deactivate_fs(1.730)[fs_umount:676] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/sapdata3'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:13.354256
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:13.354256|INFO: Deactivating Filesystem|/oracle/EPP/sapdata3'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(1.759)[fs_umount:685] : append status to the status file
+epprd_rg:cl_deactivate_fs(1.759)[fs_umount:687] print -- 0 /dev/sapdata3lv /oracle/EPP/sapdata3
+epprd_rg:cl_deactivate_fs(1.759)[fs_umount:687] 1>> /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(1.759)[fs_umount:691] return 0
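Annotation: the lv and fs_type values used in each block come from the colon-delimited lsfs -c output captured at fs_umount:367; the header row is dropped with tail -1, and ksh runs the final read in the current shell, so the variables survive the pipeline. A sketch of that parsing step, using the field names from the trace:

    # lsfs -c prints a '#MountPoint:Device:Vfs:...' header line followed by
    # one colon-separated record per file system.
    lv_lsfs=$(lsfs -c $FS)
    print -- "$lv_lsfs" | tail -1 | IFS=: read skip lv fs_type rest
    # skip    -> mount point (discarded)
    # lv      -> device, e.g. /dev/sapdata3lv
    # fs_type -> vfs type, e.g. jfs2
    # rest    -> remaining fields (nodename, size, options, ...)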
+epprd_rg:cl_deactivate_fs:/oracle/EPP/sapdata2[deactivate_fs_process_resources:764] PS4_LOOP=/oracle/EPP/sapdata2
+epprd_rg:cl_deactivate_fs:/oracle/EPP/sapdata2[deactivate_fs_process_resources:765] [[ sequential == parallel ]]
+epprd_rg:cl_deactivate_fs:/oracle/EPP/sapdata2[deactivate_fs_process_resources:770] fs_umount /oracle/EPP/sapdata2 cl_deactivate_fs epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(1.759)[fs_umount:313] FS=/oracle/EPP/sapdata2
+epprd_rg:cl_deactivate_fs(1.759)[fs_umount:313] typeset FS
+epprd_rg:cl_deactivate_fs(1.759)[fs_umount:314] PROGNAME=cl_deactivate_fs
+epprd_rg:cl_deactivate_fs(1.759)[fs_umount:314] typeset PROGNAME
+epprd_rg:cl_deactivate_fs(1.759)[fs_umount:315] TMP_FILENAME=epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(1.759)[fs_umount:315] typeset TMP_FILENAME
+epprd_rg:cl_deactivate_fs(1.759)[fs_umount:316] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:316] WPAR_ROOT=''
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:316] typeset WPAR_ROOT
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:317] STATUS=0
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:317] typeset -li STATUS
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:318] typeset lv
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:319] typeset fs_type
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:320] typeset count
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:321] typeset line
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:322] RC=0
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:322] typeset -li RC
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:323] typeset pid
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:324] typeset pidlist
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:325] typeset lv_lsfs
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:326] disable_procfile_debug=false
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:326] typeset disable_procfile_debug
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:327] typeset crossmount_rg
+epprd_rg:cl_deactivate_fs(1.780)[fs_umount:330] : Fetch filesystem type and unmount nfs filesystem
+epprd_rg:cl_deactivate_fs(1.782)[fs_umount:332] awk '$3==FILESYS && $4~"^nfs."{print $4}' FILESYS=/oracle/EPP/sapdata2
+epprd_rg:cl_deactivate_fs(1.782)[fs_umount:332] mount
+epprd_rg:cl_deactivate_fs(1.787)[fs_umount:332] fs_type=''
+epprd_rg:cl_deactivate_fs(1.787)[fs_umount:333] [[ '' == nfs* ]]
+epprd_rg:cl_deactivate_fs(1.787)[fs_umount:365] : Get the logical volume associated with the filesystem
+epprd_rg:cl_deactivate_fs(1.787)[fs_umount:367] lsfs -c /oracle/EPP/sapdata2
+epprd_rg:cl_deactivate_fs(1.790)[fs_umount:367] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata2:/dev/sapdata2lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_deactivate_fs(1.790)[fs_umount:382] : Get the logical volume name and filesystem type
+epprd_rg:cl_deactivate_fs(1.791)[fs_umount:384] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata2:/dev/sapdata2lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_deactivate_fs(1.792)[fs_umount:384] tail -1
+epprd_rg:cl_deactivate_fs(1.792)[fs_umount:384] read skip lv fs_type rest
+epprd_rg:cl_deactivate_fs(1.793)[fs_umount:384] IFS=:
+epprd_rg:cl_deactivate_fs(1.794)[fs_umount:387] : For WPARs, find the real file system name
+epprd_rg:cl_deactivate_fs(1.794)[fs_umount:389] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs(1.794)[fs_umount:392] : Check to see if filesystem is mounted.
+epprd_rg:cl_deactivate_fs(1.796)[fs_umount:394] awk '{ if ( $1 == "/dev/sapdata2lv" ) print $2 }'
+epprd_rg:cl_deactivate_fs(1.796)[fs_umount:394] mount
+epprd_rg:cl_deactivate_fs(1.796)[fs_umount:394] LC_ALL=C
+epprd_rg:cl_deactivate_fs(1.800)[fs_umount:394] FS_MOUNTED=/oracle/EPP/sapdata2
+epprd_rg:cl_deactivate_fs(1.800)[fs_umount:395] [[ -n /oracle/EPP/sapdata2 ]]
+epprd_rg:cl_deactivate_fs(1.800)[fs_umount:397] [[ /oracle/EPP/sapdata2 != /oracle/EPP/sapdata2 ]]
+epprd_rg:cl_deactivate_fs(1.800)[fs_umount:409] [[ /oracle/EPP/sapdata2 == / ]]
+epprd_rg:cl_deactivate_fs(1.800)[fs_umount:409] [[ /oracle/EPP/sapdata2 == /usr ]]
+epprd_rg:cl_deactivate_fs(1.800)[fs_umount:409] [[ /oracle/EPP/sapdata2 == /dev ]]
+epprd_rg:cl_deactivate_fs(1.800)[fs_umount:409] [[ /oracle/EPP/sapdata2 == /proc ]]
+epprd_rg:cl_deactivate_fs(1.800)[fs_umount:409] [[ /oracle/EPP/sapdata2 == /var ]]
+epprd_rg:cl_deactivate_fs(1.800)[fs_umount:425] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/sapdata2'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:13.424123
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:13.424123|INFO: Deactivating Filesystem|/oracle/EPP/sapdata2'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(1.829)[fs_umount:427] : Try up to 60 times to unmount /oracle/EPP/sapdata2
+epprd_rg:cl_deactivate_fs(1.829)[fs_umount:429] (( count=1))
+epprd_rg:cl_deactivate_fs(1.829)[fs_umount:429] (( count <= 60))
+epprd_rg:cl_deactivate_fs(1.829)[fs_umount:432] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_deactivate_fs(1.832)[fs_umount:432] : Attempt 1 of 60 to unmount at Jan 28 18:00:13.000
+epprd_rg:cl_deactivate_fs(1.832)[fs_umount:434] umount /oracle/EPP/sapdata2
+epprd_rg:cl_deactivate_fs(1.903)[fs_umount:437] : Unmount of /oracle/EPP/sapdata2 worked. Can stop now.
+epprd_rg:cl_deactivate_fs(1.904)[fs_umount:439] break
+epprd_rg:cl_deactivate_fs(1.904)[fs_umount:672] [[ 0 != 0 ]]
+epprd_rg:cl_deactivate_fs(1.904)[fs_umount:676] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/sapdata2'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:13.528195
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:13.528195|INFO: Deactivating Filesystem|/oracle/EPP/sapdata2'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(1.933)[fs_umount:685] : append status to the status file
+epprd_rg:cl_deactivate_fs(1.933)[fs_umount:687] print -- 0 /dev/sapdata2lv /oracle/EPP/sapdata2
+epprd_rg:cl_deactivate_fs(1.933)[fs_umount:687] 1>> /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(1.933)[fs_umount:691] return 0
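Annotation: the mounted-check at fs_umount:394 embeds the device name literally in the awk program text; the parameterized form below is an equivalent sketch, not the script's literal code. For local AIX mounts the device is field 1 and the mount point field 2:

    # Returns the mount point when the device is mounted, empty otherwise.
    FS_MOUNTED=$(LC_ALL=C mount | awk -v dev="$lv" '$1 == dev { print $2 }')
    [[ -n $FS_MOUNTED ]] && : $lv is mounted on $FS_MOUNTED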
+epprd_rg:cl_deactivate_fs:/oracle/EPP/sapdata1[deactivate_fs_process_resources:764] PS4_LOOP=/oracle/EPP/sapdata1
+epprd_rg:cl_deactivate_fs:/oracle/EPP/sapdata1[deactivate_fs_process_resources:765] [[ sequential == parallel ]]
+epprd_rg:cl_deactivate_fs:/oracle/EPP/sapdata1[deactivate_fs_process_resources:770] fs_umount /oracle/EPP/sapdata1 cl_deactivate_fs epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(1.933)[fs_umount:313] FS=/oracle/EPP/sapdata1
+epprd_rg:cl_deactivate_fs(1.933)[fs_umount:313] typeset FS
+epprd_rg:cl_deactivate_fs(1.933)[fs_umount:314] PROGNAME=cl_deactivate_fs
+epprd_rg:cl_deactivate_fs(1.933)[fs_umount:314] typeset PROGNAME
+epprd_rg:cl_deactivate_fs(1.933)[fs_umount:315] TMP_FILENAME=epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(1.933)[fs_umount:315] typeset TMP_FILENAME
+epprd_rg:cl_deactivate_fs(1.933)[fs_umount:316] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_deactivate_fs(1.954)[fs_umount:316] WPAR_ROOT=''
+epprd_rg:cl_deactivate_fs(1.954)[fs_umount:316] typeset WPAR_ROOT
+epprd_rg:cl_deactivate_fs(1.955)[fs_umount:317] STATUS=0
+epprd_rg:cl_deactivate_fs(1.955)[fs_umount:317] typeset -li STATUS
+epprd_rg:cl_deactivate_fs(1.955)[fs_umount:318] typeset lv
+epprd_rg:cl_deactivate_fs(1.955)[fs_umount:319] typeset fs_type
+epprd_rg:cl_deactivate_fs(1.955)[fs_umount:320] typeset count
+epprd_rg:cl_deactivate_fs(1.955)[fs_umount:321] typeset line
+epprd_rg:cl_deactivate_fs(1.955)[fs_umount:322] RC=0
+epprd_rg:cl_deactivate_fs(1.955)[fs_umount:322] typeset -li RC
+epprd_rg:cl_deactivate_fs(1.955)[fs_umount:323] typeset pid
+epprd_rg:cl_deactivate_fs(1.955)[fs_umount:324] typeset pidlist
+epprd_rg:cl_deactivate_fs(1.955)[fs_umount:325] typeset lv_lsfs
+epprd_rg:cl_deactivate_fs(1.955)[fs_umount:326] disable_procfile_debug=false
+epprd_rg:cl_deactivate_fs(1.955)[fs_umount:326] typeset disable_procfile_debug
+epprd_rg:cl_deactivate_fs(1.955)[fs_umount:327] typeset crossmount_rg
+epprd_rg:cl_deactivate_fs(1.955)[fs_umount:330] : Fetch filesystem type and unmount nfs filesystem
+epprd_rg:cl_deactivate_fs(1.956)[fs_umount:332] awk '$3==FILESYS && $4~"^nfs."{print $4}' FILESYS=/oracle/EPP/sapdata1
+epprd_rg:cl_deactivate_fs(1.956)[fs_umount:332] mount
+epprd_rg:cl_deactivate_fs(1.961)[fs_umount:332] fs_type=''
+epprd_rg:cl_deactivate_fs(1.961)[fs_umount:333] [[ '' == nfs* ]]
+epprd_rg:cl_deactivate_fs(1.961)[fs_umount:365] : Get the logical volume associated with the filesystem
+epprd_rg:cl_deactivate_fs(1.961)[fs_umount:367] lsfs -c /oracle/EPP/sapdata1
+epprd_rg:cl_deactivate_fs(1.964)[fs_umount:367] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata1:/dev/sapdata1lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_deactivate_fs(1.964)[fs_umount:382] : Get the logical volume name and filesystem type
+epprd_rg:cl_deactivate_fs(1.965)[fs_umount:384] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata1:/dev/sapdata1lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_deactivate_fs(1.966)[fs_umount:384] tail -1
+epprd_rg:cl_deactivate_fs(1.967)[fs_umount:384] read skip lv fs_type rest
+epprd_rg:cl_deactivate_fs(1.967)[fs_umount:384] IFS=:
+epprd_rg:cl_deactivate_fs(1.968)[fs_umount:387] : For WPARs, find the real file system name
+epprd_rg:cl_deactivate_fs(1.968)[fs_umount:389] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs(1.968)[fs_umount:392] : Check to see if filesystem is mounted.
+epprd_rg:cl_deactivate_fs(1.970)[fs_umount:394] awk '{ if ( $1 == "/dev/sapdata1lv" ) print $2 }'
+epprd_rg:cl_deactivate_fs(1.970)[fs_umount:394] mount
+epprd_rg:cl_deactivate_fs(1.970)[fs_umount:394] LC_ALL=C
+epprd_rg:cl_deactivate_fs(1.974)[fs_umount:394] FS_MOUNTED=/oracle/EPP/sapdata1
+epprd_rg:cl_deactivate_fs(1.974)[fs_umount:395] [[ -n /oracle/EPP/sapdata1 ]]
+epprd_rg:cl_deactivate_fs(1.974)[fs_umount:397] [[ /oracle/EPP/sapdata1 != /oracle/EPP/sapdata1 ]]
+epprd_rg:cl_deactivate_fs(1.974)[fs_umount:409] [[ /oracle/EPP/sapdata1 == / ]]
+epprd_rg:cl_deactivate_fs(1.974)[fs_umount:409] [[ /oracle/EPP/sapdata1 == /usr ]]
+epprd_rg:cl_deactivate_fs(1.974)[fs_umount:409] [[ /oracle/EPP/sapdata1 == /dev ]]
+epprd_rg:cl_deactivate_fs(1.974)[fs_umount:409] [[ /oracle/EPP/sapdata1 == /proc ]]
+epprd_rg:cl_deactivate_fs(1.974)[fs_umount:409] [[ /oracle/EPP/sapdata1 == /var ]]
+epprd_rg:cl_deactivate_fs(1.974)[fs_umount:425] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/sapdata1'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:13.597460
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:13.597460|INFO: Deactivating Filesystem|/oracle/EPP/sapdata1'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(2.002)[fs_umount:427] : Try up to 60 times to unmount /oracle/EPP/sapdata1
+epprd_rg:cl_deactivate_fs(2.002)[fs_umount:429] (( count=1))
+epprd_rg:cl_deactivate_fs(2.002)[fs_umount:429] (( count <= 60))
+epprd_rg:cl_deactivate_fs(2.002)[fs_umount:432] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_deactivate_fs(2.005)[fs_umount:432] : Attempt 1 of 60 to unmount at Jan 28 18:00:13.000
+epprd_rg:cl_deactivate_fs(2.005)[fs_umount:434] umount /oracle/EPP/sapdata1
+epprd_rg:cl_deactivate_fs(2.075)[fs_umount:437] : Unmount of /oracle/EPP/sapdata1 worked. Can stop now.
+epprd_rg:cl_deactivate_fs(2.075)[fs_umount:439] break
+epprd_rg:cl_deactivate_fs(2.076)[fs_umount:672] [[ 0 != 0 ]]
+epprd_rg:cl_deactivate_fs(2.076)[fs_umount:676] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/sapdata1'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:13.699550
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:13.699550|INFO: Deactivating Filesystem|/oracle/EPP/sapdata1'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(2.104)[fs_umount:685] : append status to the status file
+epprd_rg:cl_deactivate_fs(2.104)[fs_umount:687] print -- 0 /dev/sapdata1lv /oracle/EPP/sapdata1
+epprd_rg:cl_deactivate_fs(2.104)[fs_umount:687] 1>> /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.104)[fs_umount:691] return 0
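Annotation: every fs_umount call repeats the clwparroot/loadWparName exchange seen above; with no WPAR_NAME resource in the ODM it prints nothing and exits 0, so WPAR_ROOT stays empty and the mount point is used unmodified. A condensed sketch of that decision (the clodmget query is copied from the trace; the surrounding version and error handling is trimmed):

    # loadWparName, condensed: ask the HACMP ODM for a WPAR name.
    wparName=$(clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource)
    if [[ -z $wparName ]]
    then
        # No WPAR configured: print nothing and exit 0, so the caller's
        # WPAR_ROOT ends up empty and paths are used as-is.
        exit 0
    fi
    # With a WPAR configured, clwparroot would print the WPAR root path here.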
+epprd_rg:cl_deactivate_fs:/oracle/EPP/origlogB[deactivate_fs_process_resources:764] PS4_LOOP=/oracle/EPP/origlogB
+epprd_rg:cl_deactivate_fs:/oracle/EPP/origlogB[deactivate_fs_process_resources:765] [[ sequential == parallel ]]
+epprd_rg:cl_deactivate_fs:/oracle/EPP/origlogB[deactivate_fs_process_resources:770] fs_umount /oracle/EPP/origlogB cl_deactivate_fs epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.105)[fs_umount:313] FS=/oracle/EPP/origlogB
+epprd_rg:cl_deactivate_fs(2.105)[fs_umount:313] typeset FS
+epprd_rg:cl_deactivate_fs(2.105)[fs_umount:314] PROGNAME=cl_deactivate_fs
+epprd_rg:cl_deactivate_fs(2.105)[fs_umount:314] typeset PROGNAME
+epprd_rg:cl_deactivate_fs(2.105)[fs_umount:315] TMP_FILENAME=epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.105)[fs_umount:315] typeset TMP_FILENAME
+epprd_rg:cl_deactivate_fs(2.105)[fs_umount:316] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:316] WPAR_ROOT=''
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:316] typeset WPAR_ROOT
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:317] STATUS=0
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:317] typeset -li STATUS
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:318] typeset lv
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:319] typeset fs_type
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:320] typeset count
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:321] typeset line
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:322] RC=0
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:322] typeset -li RC
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:323] typeset pid
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:324] typeset pidlist
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:325] typeset lv_lsfs
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:326] disable_procfile_debug=false
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:326] typeset disable_procfile_debug
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:327] typeset crossmount_rg
+epprd_rg:cl_deactivate_fs(2.125)[fs_umount:330] : Fetch filesystem type and unmount nfs filesystem
+epprd_rg:cl_deactivate_fs(2.127)[fs_umount:332] awk '$3==FILESYS && $4~"^nfs."{print $4}' FILESYS=/oracle/EPP/origlogB
+epprd_rg:cl_deactivate_fs(2.127)[fs_umount:332] mount
+epprd_rg:cl_deactivate_fs(2.131)[fs_umount:332] fs_type=''
+epprd_rg:cl_deactivate_fs(2.131)[fs_umount:333] [[ '' == nfs* ]]
+epprd_rg:cl_deactivate_fs(2.131)[fs_umount:365] : Get the logical volume associated with the filesystem
+epprd_rg:cl_deactivate_fs(2.131)[fs_umount:367] lsfs -c /oracle/EPP/origlogB
+epprd_rg:cl_deactivate_fs(2.134)[fs_umount:367] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/origlogB:/dev/origlogBlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_deactivate_fs(2.134)[fs_umount:382] : Get the logical volume name and filesystem type
+epprd_rg:cl_deactivate_fs(2.135)[fs_umount:384] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/origlogB:/dev/origlogBlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_deactivate_fs(2.136)[fs_umount:384] tail -1
+epprd_rg:cl_deactivate_fs(2.137)[fs_umount:384] read skip lv fs_type rest
+epprd_rg:cl_deactivate_fs(2.137)[fs_umount:384] IFS=:
+epprd_rg:cl_deactivate_fs(2.138)[fs_umount:387] : For WPARs, find the real file system name
+epprd_rg:cl_deactivate_fs(2.138)[fs_umount:389] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs(2.138)[fs_umount:392] : Check to see if filesystem is mounted.
+epprd_rg:cl_deactivate_fs(2.140)[fs_umount:394] awk '{ if ( $1 == "/dev/origlogBlv" ) print $2 }'
+epprd_rg:cl_deactivate_fs(2.140)[fs_umount:394] mount
+epprd_rg:cl_deactivate_fs(2.140)[fs_umount:394] LC_ALL=C
+epprd_rg:cl_deactivate_fs(2.144)[fs_umount:394] FS_MOUNTED=/oracle/EPP/origlogB
+epprd_rg:cl_deactivate_fs(2.144)[fs_umount:395] [[ -n /oracle/EPP/origlogB ]]
+epprd_rg:cl_deactivate_fs(2.144)[fs_umount:397] [[ /oracle/EPP/origlogB != /oracle/EPP/origlogB ]]
+epprd_rg:cl_deactivate_fs(2.144)[fs_umount:409] [[ /oracle/EPP/origlogB == / ]]
+epprd_rg:cl_deactivate_fs(2.144)[fs_umount:409] [[ /oracle/EPP/origlogB == /usr ]]
+epprd_rg:cl_deactivate_fs(2.144)[fs_umount:409] [[ /oracle/EPP/origlogB == /dev ]]
+epprd_rg:cl_deactivate_fs(2.144)[fs_umount:409] [[ /oracle/EPP/origlogB == /proc ]]
+epprd_rg:cl_deactivate_fs(2.144)[fs_umount:409] [[ /oracle/EPP/origlogB == /var ]]
+epprd_rg:cl_deactivate_fs(2.144)[fs_umount:425] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/origlogB'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:13.767937
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:13.767937|INFO: Deactivating Filesystem|/oracle/EPP/origlogB'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(2.173)[fs_umount:427] : Try up to 60 times to unmount /oracle/EPP/origlogB
+epprd_rg:cl_deactivate_fs(2.173)[fs_umount:429] (( count=1))
+epprd_rg:cl_deactivate_fs(2.173)[fs_umount:429] (( count <= 60))
+epprd_rg:cl_deactivate_fs(2.173)[fs_umount:432] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_deactivate_fs(2.176)[fs_umount:432] : Attempt 1 of 60 to unmount at Jan 28 18:00:13.000
+epprd_rg:cl_deactivate_fs(2.176)[fs_umount:434] umount /oracle/EPP/origlogB
+epprd_rg:cl_deactivate_fs(2.245)[fs_umount:437] : Unmount of /oracle/EPP/origlogB worked. Can stop now.
+epprd_rg:cl_deactivate_fs(2.245)[fs_umount:439] break
+epprd_rg:cl_deactivate_fs(2.246)[fs_umount:672] [[ 0 != 0 ]]
+epprd_rg:cl_deactivate_fs(2.246)[fs_umount:676] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/origlogB'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:13.869621
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:13.869621|INFO: Deactivating Filesystem|/oracle/EPP/origlogB'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(2.274)[fs_umount:685] : append status to the status file
+epprd_rg:cl_deactivate_fs(2.274)[fs_umount:687] print -- 0 /dev/origlogBlv /oracle/EPP/origlogB
+epprd_rg:cl_deactivate_fs(2.274)[fs_umount:687] 1>> /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.275)[fs_umount:691] return 0
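Annotation: each amlog_trace call rotates clavailability.log via clcycle and appends one pipe-delimited record stamped by cltime. A sketch of the record writer exactly as this trace exercises it; the leading empty field reflects the '' first argument, and the 'INFO: ' prefix is supplied by amlog_trace itself:

    # amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/origlogB', unrolled:
    clcycle clavailability.log > /dev/null 2>&1      # rotate the log if needed
    DATE=$(cltime)                                   # e.g. 2023-01-28T18:00:13.767937
    echo "|$DATE|INFO: Deactivating Filesystem|/oracle/EPP/origlogB" \
        >> /var/hacmp/availability/clavailability.log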
+epprd_rg:cl_deactivate_fs:/oracle/EPP/origlogA[deactivate_fs_process_resources:764] PS4_LOOP=/oracle/EPP/origlogA
+epprd_rg:cl_deactivate_fs:/oracle/EPP/origlogA[deactivate_fs_process_resources:765] [[ sequential == parallel ]]
+epprd_rg:cl_deactivate_fs:/oracle/EPP/origlogA[deactivate_fs_process_resources:770] fs_umount /oracle/EPP/origlogA cl_deactivate_fs epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.275)[fs_umount:313] FS=/oracle/EPP/origlogA
+epprd_rg:cl_deactivate_fs(2.275)[fs_umount:313] typeset FS
+epprd_rg:cl_deactivate_fs(2.275)[fs_umount:314] PROGNAME=cl_deactivate_fs
+epprd_rg:cl_deactivate_fs(2.275)[fs_umount:314] typeset PROGNAME
+epprd_rg:cl_deactivate_fs(2.275)[fs_umount:315] TMP_FILENAME=epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.275)[fs_umount:315] typeset TMP_FILENAME
+epprd_rg:cl_deactivate_fs(2.275)[fs_umount:316] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:316] WPAR_ROOT=''
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:316] typeset WPAR_ROOT
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:317] STATUS=0
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:317] typeset -li STATUS
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:318] typeset lv
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:319] typeset fs_type
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:320] typeset count
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:321] typeset line
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:322] RC=0
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:322] typeset -li RC
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:323] typeset pid
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:324] typeset pidlist
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:325] typeset lv_lsfs
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:326] disable_procfile_debug=false
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:326] typeset disable_procfile_debug
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:327] typeset crossmount_rg
+epprd_rg:cl_deactivate_fs(2.295)[fs_umount:330] : Fetch filesystem type and unmount nfs filesystem
+epprd_rg:cl_deactivate_fs(2.297)[fs_umount:332] awk '$3==FILESYS && $4~"^nfs."{print $4}' FILESYS=/oracle/EPP/origlogA
+epprd_rg:cl_deactivate_fs(2.297)[fs_umount:332] mount
+epprd_rg:cl_deactivate_fs(2.301)[fs_umount:332] fs_type=''
+epprd_rg:cl_deactivate_fs(2.301)[fs_umount:333] [[ '' == nfs* ]]
+epprd_rg:cl_deactivate_fs(2.301)[fs_umount:365] : Get the logical volume associated with the filesystem
+epprd_rg:cl_deactivate_fs(2.301)[fs_umount:367] lsfs -c /oracle/EPP/origlogA
+epprd_rg:cl_deactivate_fs(2.304)[fs_umount:367] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/origlogA:/dev/origlogAlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_deactivate_fs(2.304)[fs_umount:382] : Get the logical volume name and filesystem type
+epprd_rg:cl_deactivate_fs(2.305)[fs_umount:384] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/origlogA:/dev/origlogAlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_deactivate_fs(2.306)[fs_umount:384] tail -1
+epprd_rg:cl_deactivate_fs(2.307)[fs_umount:384] read skip lv fs_type rest
+epprd_rg:cl_deactivate_fs(2.307)[fs_umount:384] IFS=:
+epprd_rg:cl_deactivate_fs(2.308)[fs_umount:387] : For WPARs, find the real file system name
+epprd_rg:cl_deactivate_fs(2.308)[fs_umount:389] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs(2.308)[fs_umount:392] : Check to see if filesystem is mounted.
+epprd_rg:cl_deactivate_fs(2.310)[fs_umount:394] awk '{ if ( $1 == "/dev/origlogAlv" ) print $2 }'
+epprd_rg:cl_deactivate_fs(2.310)[fs_umount:394] mount
+epprd_rg:cl_deactivate_fs(2.310)[fs_umount:394] LC_ALL=C
+epprd_rg:cl_deactivate_fs(2.314)[fs_umount:394] FS_MOUNTED=/oracle/EPP/origlogA
+epprd_rg:cl_deactivate_fs(2.314)[fs_umount:395] [[ -n /oracle/EPP/origlogA ]]
+epprd_rg:cl_deactivate_fs(2.314)[fs_umount:397] [[ /oracle/EPP/origlogA != /oracle/EPP/origlogA ]]
+epprd_rg:cl_deactivate_fs(2.314)[fs_umount:409] [[ /oracle/EPP/origlogA == / ]]
+epprd_rg:cl_deactivate_fs(2.315)[fs_umount:409] [[ /oracle/EPP/origlogA == /usr ]]
+epprd_rg:cl_deactivate_fs(2.315)[fs_umount:409] [[ /oracle/EPP/origlogA == /dev ]]
+epprd_rg:cl_deactivate_fs(2.315)[fs_umount:409] [[ /oracle/EPP/origlogA == /proc ]]
+epprd_rg:cl_deactivate_fs(2.315)[fs_umount:409] [[ /oracle/EPP/origlogA == /var ]]
+epprd_rg:cl_deactivate_fs(2.315)[fs_umount:425] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/origlogA'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:13.938250
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:13.938250|INFO: Deactivating Filesystem|/oracle/EPP/origlogA'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(2.343)[fs_umount:427] : Try up to 60 times to unmount /oracle/EPP/origlogA
+epprd_rg:cl_deactivate_fs(2.343)[fs_umount:429] (( count=1))
+epprd_rg:cl_deactivate_fs(2.343)[fs_umount:429] (( count <= 60))
+epprd_rg:cl_deactivate_fs(2.343)[fs_umount:432] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_deactivate_fs(2.346)[fs_umount:432] : Attempt 1 of 60 to unmount at Jan 28 18:00:13.000
+epprd_rg:cl_deactivate_fs(2.346)[fs_umount:434] umount /oracle/EPP/origlogA
+epprd_rg:cl_deactivate_fs(2.416)[fs_umount:437] : Unmount of /oracle/EPP/origlogA worked. Can stop now.
+epprd_rg:cl_deactivate_fs(2.416)[fs_umount:439] break
+epprd_rg:cl_deactivate_fs(2.416)[fs_umount:672] [[ 0 != 0 ]]
+epprd_rg:cl_deactivate_fs(2.416)[fs_umount:676] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/origlogA'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:14.039986
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:14.039986|INFO: Deactivating Filesystem|/oracle/EPP/origlogA'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(2.445)[fs_umount:685] : append status to the status file
+epprd_rg:cl_deactivate_fs(2.445)[fs_umount:687] print -- 0 /dev/origlogAlv /oracle/EPP/origlogA
+epprd_rg:cl_deactivate_fs(2.445)[fs_umount:687] 1>> /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.445)[fs_umount:691] return 0
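Annotation: fs_umount:687 appends one 'rc lv mount-point' record to /tmp/epprd_rg_deactivate_fs.tmp per file system, with rc=0 throughout this trace. The writer below is taken from the trace; the reader loop is a hypothetical illustration of how such a status file could be consumed, not code from this script:

    # Writer, as at fs_umount:687 (rc of 0 marks success):
    print -- 0 /dev/origlogAlv /oracle/EPP/origlogA >> /tmp/epprd_rg_deactivate_fs.tmp

    # Hypothetical reader: report any file system whose unmount failed.
    while read rc lv fs
    do
        (( rc != 0 )) && print -u2 "unmount of $fs ($lv) failed, rc=$rc"
    done < /tmp/epprd_rg_deactivate_fs.tmp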
+epprd_rg:cl_deactivate_fs:/oracle/EPP/oraarch[deactivate_fs_process_resources:764] PS4_LOOP=/oracle/EPP/oraarch
+epprd_rg:cl_deactivate_fs:/oracle/EPP/oraarch[deactivate_fs_process_resources:765] [[ sequential == parallel ]]
+epprd_rg:cl_deactivate_fs:/oracle/EPP/oraarch[deactivate_fs_process_resources:770] fs_umount /oracle/EPP/oraarch cl_deactivate_fs epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.445)[fs_umount:313] FS=/oracle/EPP/oraarch
+epprd_rg:cl_deactivate_fs(2.445)[fs_umount:313] typeset FS
+epprd_rg:cl_deactivate_fs(2.445)[fs_umount:314] PROGNAME=cl_deactivate_fs
+epprd_rg:cl_deactivate_fs(2.445)[fs_umount:314] typeset PROGNAME
+epprd_rg:cl_deactivate_fs(2.445)[fs_umount:315] TMP_FILENAME=epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.445)[fs_umount:315] typeset TMP_FILENAME
+epprd_rg:cl_deactivate_fs(2.445)[fs_umount:316] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:316] WPAR_ROOT=''
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:316] typeset WPAR_ROOT
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:317] STATUS=0
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:317] typeset -li STATUS
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:318] typeset lv
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:319] typeset fs_type
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:320] typeset count
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:321] typeset line
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:322] RC=0
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:322] typeset -li RC
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:323] typeset pid
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:324] typeset pidlist
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:325] typeset lv_lsfs
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:326] disable_procfile_debug=false
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:326] typeset disable_procfile_debug
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:327] typeset crossmount_rg
+epprd_rg:cl_deactivate_fs(2.466)[fs_umount:330] : Fetch filesystem type and unmount nfs filesystem
+epprd_rg:cl_deactivate_fs(2.468)[fs_umount:332] awk '$3==FILESYS && $4~"^nfs."{print $4}' FILESYS=/oracle/EPP/oraarch
+epprd_rg:cl_deactivate_fs(2.468)[fs_umount:332] mount
+epprd_rg:cl_deactivate_fs(2.472)[fs_umount:332] fs_type=''
+epprd_rg:cl_deactivate_fs(2.472)[fs_umount:333] [[ '' == nfs* ]]
+epprd_rg:cl_deactivate_fs(2.472)[fs_umount:365] : Get the logical volume associated with the filesystem
+epprd_rg:cl_deactivate_fs(2.472)[fs_umount:367] lsfs -c /oracle/EPP/oraarch
+epprd_rg:cl_deactivate_fs(2.475)[fs_umount:367] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/oraarch:/dev/oraarchlv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_deactivate_fs(2.475)[fs_umount:382] : Get the logical volume name and filesystem type
+epprd_rg:cl_deactivate_fs(2.476)[fs_umount:384] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/oraarch:/dev/oraarchlv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_deactivate_fs(2.477)[fs_umount:384] tail -1
+epprd_rg:cl_deactivate_fs(2.478)[fs_umount:384] read skip lv fs_type rest
+epprd_rg:cl_deactivate_fs(2.478)[fs_umount:384] IFS=:
+epprd_rg:cl_deactivate_fs(2.479)[fs_umount:387] : For WPARs, find the real file system name
+epprd_rg:cl_deactivate_fs(2.479)[fs_umount:389] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs(2.479)[fs_umount:392] : Check to see if filesystem is mounted.
+epprd_rg:cl_deactivate_fs(2.481)[fs_umount:394] awk '{ if ( $1 == "/dev/oraarchlv" ) print $2 }'
+epprd_rg:cl_deactivate_fs(2.481)[fs_umount:394] mount
+epprd_rg:cl_deactivate_fs(2.481)[fs_umount:394] LC_ALL=C
+epprd_rg:cl_deactivate_fs(2.485)[fs_umount:394] FS_MOUNTED=/oracle/EPP/oraarch
+epprd_rg:cl_deactivate_fs(2.485)[fs_umount:395] [[ -n /oracle/EPP/oraarch ]]
+epprd_rg:cl_deactivate_fs(2.485)[fs_umount:397] [[ /oracle/EPP/oraarch != /oracle/EPP/oraarch ]]
+epprd_rg:cl_deactivate_fs(2.485)[fs_umount:409] [[ /oracle/EPP/oraarch == / ]]
+epprd_rg:cl_deactivate_fs(2.485)[fs_umount:409] [[ /oracle/EPP/oraarch == /usr ]]
+epprd_rg:cl_deactivate_fs(2.485)[fs_umount:409] [[ /oracle/EPP/oraarch == /dev ]]
+epprd_rg:cl_deactivate_fs(2.485)[fs_umount:409] [[ /oracle/EPP/oraarch == /proc ]]
+epprd_rg:cl_deactivate_fs(2.485)[fs_umount:409] [[ /oracle/EPP/oraarch == /var ]]
+epprd_rg:cl_deactivate_fs(2.485)[fs_umount:425] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/oraarch'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:14.108880
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:14.108880|INFO: Deactivating Filesystem|/oracle/EPP/oraarch'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(2.514)[fs_umount:427] : Try up to 60 times to unmount /oracle/EPP/oraarch
+epprd_rg:cl_deactivate_fs(2.514)[fs_umount:429] (( count=1))
+epprd_rg:cl_deactivate_fs(2.514)[fs_umount:429] (( count <= 60))
+epprd_rg:cl_deactivate_fs(2.514)[fs_umount:432] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_deactivate_fs(2.517)[fs_umount:432] : Attempt 1 of 60 to unmount at Jan 28 18:00:14.000
+epprd_rg:cl_deactivate_fs(2.517)[fs_umount:434] umount /oracle/EPP/oraarch
+epprd_rg:cl_deactivate_fs(2.587)[fs_umount:437] : Unmount of /oracle/EPP/oraarch worked. Can stop now.
+epprd_rg:cl_deactivate_fs(2.587)[fs_umount:439] break
+epprd_rg:cl_deactivate_fs(2.587)[fs_umount:672] [[ 0 != 0 ]]
+epprd_rg:cl_deactivate_fs(2.587)[fs_umount:676] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/oraarch'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:14.211239
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:14.211239|INFO: Deactivating Filesystem|/oracle/EPP/oraarch'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(2.616)[fs_umount:685] : append status to the status file
+epprd_rg:cl_deactivate_fs(2.616)[fs_umount:687] print -- 0 /dev/oraarchlv /oracle/EPP/oraarch
+epprd_rg:cl_deactivate_fs(2.616)[fs_umount:687] 1>> /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.616)[fs_umount:691] return 0
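Annotation: the first test in each block (fs_umount:332) decides whether the mount point is an NFS cross-mount. AIX mount output carries a leading node column for remote mounts, so the awk keys on field 3 for the mount point and field 4 for a vfs type starting with 'nfs'; all nine file systems here are jfs2, so fs_type is always empty and the NFS branch at fs_umount:333 is never taken. A sketch:

    # For remote mounts, AIX mount output starts with a node column, so the
    # mount point is field 3 and the vfs type (nfs3, nfs4, ...) field 4.
    fs_type=$(mount | awk '$3 == FILESYS && $4 ~ "^nfs." { print $4 }' FILESYS=$FS)
    if [[ $fs_type == nfs* ]]
    then
        : NFS cross-mount: unmount it here and skip the LVM processing
    fi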
+epprd_rg:cl_deactivate_fs:/oracle/EPP/mirrlogB[deactivate_fs_process_resources:764] PS4_LOOP=/oracle/EPP/mirrlogB
+epprd_rg:cl_deactivate_fs:/oracle/EPP/mirrlogB[deactivate_fs_process_resources:765] [[ sequential == parallel ]]
+epprd_rg:cl_deactivate_fs:/oracle/EPP/mirrlogB[deactivate_fs_process_resources:770] fs_umount /oracle/EPP/mirrlogB cl_deactivate_fs epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.616)[fs_umount:313] FS=/oracle/EPP/mirrlogB
+epprd_rg:cl_deactivate_fs(2.616)[fs_umount:313] typeset FS
+epprd_rg:cl_deactivate_fs(2.616)[fs_umount:314] PROGNAME=cl_deactivate_fs
+epprd_rg:cl_deactivate_fs(2.616)[fs_umount:314] typeset PROGNAME
+epprd_rg:cl_deactivate_fs(2.616)[fs_umount:315] TMP_FILENAME=epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.616)[fs_umount:315] typeset TMP_FILENAME
+epprd_rg:cl_deactivate_fs(2.616)[fs_umount:316] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:316] WPAR_ROOT=''
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:316] typeset WPAR_ROOT
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:317] STATUS=0
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:317] typeset -li STATUS
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:318] typeset lv
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:319] typeset fs_type
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:320] typeset count
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:321] typeset line
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:322] RC=0
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:322] typeset -li RC
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:323] typeset pid
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:324] typeset pidlist
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:325] typeset lv_lsfs
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:326] disable_procfile_debug=false
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:326] typeset disable_procfile_debug
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:327] typeset crossmount_rg
+epprd_rg:cl_deactivate_fs(2.637)[fs_umount:330] : Fetch filesystem type and unmount nfs filesystem
+epprd_rg:cl_deactivate_fs(2.639)[fs_umount:332] awk '$3==FILESYS && $4~"^nfs."{print $4}' FILESYS=/oracle/EPP/mirrlogB
+epprd_rg:cl_deactivate_fs(2.639)[fs_umount:332] mount
+epprd_rg:cl_deactivate_fs(2.643)[fs_umount:332] fs_type=''
+epprd_rg:cl_deactivate_fs(2.643)[fs_umount:333] [[ '' == nfs* ]]
+epprd_rg:cl_deactivate_fs(2.643)[fs_umount:365] : Get the logical volume associated with the filesystem
+epprd_rg:cl_deactivate_fs(2.643)[fs_umount:367] lsfs -c /oracle/EPP/mirrlogB
+epprd_rg:cl_deactivate_fs(2.646)[fs_umount:367] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/mirrlogB:/dev/mirrlogBlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_deactivate_fs(2.646)[fs_umount:382] : Get the logical volume name and filesystem type
+epprd_rg:cl_deactivate_fs(2.647)[fs_umount:384] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/mirrlogB:/dev/mirrlogBlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_deactivate_fs(2.648)[fs_umount:384] tail -1
+epprd_rg:cl_deactivate_fs(2.649)[fs_umount:384] read skip lv fs_type rest
+epprd_rg:cl_deactivate_fs(2.649)[fs_umount:384] IFS=:
+epprd_rg:cl_deactivate_fs(2.650)[fs_umount:387] : For WPARs, find the real file system name
+epprd_rg:cl_deactivate_fs(2.650)[fs_umount:389] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs(2.650)[fs_umount:392] : Check to see if filesystem is mounted.
+epprd_rg:cl_deactivate_fs(2.652)[fs_umount:394] awk '{ if ( $1 == "/dev/mirrlogBlv" ) print $2 }'
+epprd_rg:cl_deactivate_fs(2.652)[fs_umount:394] mount
+epprd_rg:cl_deactivate_fs(2.652)[fs_umount:394] LC_ALL=C
+epprd_rg:cl_deactivate_fs(2.656)[fs_umount:394] FS_MOUNTED=/oracle/EPP/mirrlogB
+epprd_rg:cl_deactivate_fs(2.656)[fs_umount:395] [[ -n /oracle/EPP/mirrlogB ]]
+epprd_rg:cl_deactivate_fs(2.656)[fs_umount:397] [[ /oracle/EPP/mirrlogB != /oracle/EPP/mirrlogB ]]
+epprd_rg:cl_deactivate_fs(2.656)[fs_umount:409] [[ /oracle/EPP/mirrlogB == / ]]
+epprd_rg:cl_deactivate_fs(2.656)[fs_umount:409] [[ /oracle/EPP/mirrlogB == /usr ]]
+epprd_rg:cl_deactivate_fs(2.656)[fs_umount:409] [[ /oracle/EPP/mirrlogB == /dev ]]
+epprd_rg:cl_deactivate_fs(2.656)[fs_umount:409] [[ /oracle/EPP/mirrlogB == /proc ]]
+epprd_rg:cl_deactivate_fs(2.656)[fs_umount:409] [[ /oracle/EPP/mirrlogB == /var ]]
+epprd_rg:cl_deactivate_fs(2.657)[fs_umount:425] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/mirrlogB'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:14.279998
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:14.279998|INFO: Deactivating Filesystem|/oracle/EPP/mirrlogB'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(2.685)[fs_umount:427] : Try up to 60 times to unmount /oracle/EPP/mirrlogB
+epprd_rg:cl_deactivate_fs(2.685)[fs_umount:429] (( count=1))
+epprd_rg:cl_deactivate_fs(2.685)[fs_umount:429] (( count <= 60))
+epprd_rg:cl_deactivate_fs(2.685)[fs_umount:432] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_deactivate_fs(2.688)[fs_umount:432] : Attempt 1 of 60 to unmount at Jan 28 18:00:14.000
+epprd_rg:cl_deactivate_fs(2.688)[fs_umount:434] umount /oracle/EPP/mirrlogB
+epprd_rg:cl_deactivate_fs(2.757)[fs_umount:437] : Unmount of /oracle/EPP/mirrlogB worked. Can stop now.
+epprd_rg:cl_deactivate_fs(2.757)[fs_umount:439] break
+epprd_rg:cl_deactivate_fs(2.757)[fs_umount:672] [[ 0 != 0 ]]
+epprd_rg:cl_deactivate_fs(2.757)[fs_umount:676] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/mirrlogB'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:14.381437
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:14.381437|INFO: Deactivating Filesystem|/oracle/EPP/mirrlogB'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(2.786)[fs_umount:685] : append status to the status file
+epprd_rg:cl_deactivate_fs(2.786)[fs_umount:687] print -- 0 /dev/mirrlogBlv /oracle/EPP/mirrlogB
+epprd_rg:cl_deactivate_fs(2.786)[fs_umount:687] 1>> /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.786)[fs_umount:691] return 0
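Annotation: fs_umount:409 refuses to unmount a fixed list of system mount points. The trace shows a chain of [[ ... == ... ]] tests; the case statement below is a condensed equivalent of the same guard, and the action on a match is an assumption, since the branch never fires in this trace:

    # Never unmount critical system file systems, whatever the RG says.
    case $FS_MOUNTED in
        / | /usr | /dev | /proc | /var )
            print -u2 "refusing to unmount system file system $FS_MOUNTED"
            return 1
            ;;
    esac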
+epprd_rg:cl_deactivate_fs:/oracle/EPP/mirrlogA[deactivate_fs_process_resources:764] PS4_LOOP=/oracle/EPP/mirrlogA
+epprd_rg:cl_deactivate_fs:/oracle/EPP/mirrlogA[deactivate_fs_process_resources:765] [[ sequential == parallel ]]
+epprd_rg:cl_deactivate_fs:/oracle/EPP/mirrlogA[deactivate_fs_process_resources:770] fs_umount /oracle/EPP/mirrlogA cl_deactivate_fs epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.786)[fs_umount:313] FS=/oracle/EPP/mirrlogA
+epprd_rg:cl_deactivate_fs(2.786)[fs_umount:313] typeset FS
+epprd_rg:cl_deactivate_fs(2.786)[fs_umount:314] PROGNAME=cl_deactivate_fs
+epprd_rg:cl_deactivate_fs(2.786)[fs_umount:314] typeset PROGNAME
+epprd_rg:cl_deactivate_fs(2.787)[fs_umount:315] TMP_FILENAME=epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.787)[fs_umount:315] typeset TMP_FILENAME
+epprd_rg:cl_deactivate_fs(2.787)[fs_umount:316] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
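
clwparroot only matters when the resource group runs inside a WPAR: it asks the HACMP ODM for a WPAR_NAME resource and, finding none, prints nothing and exits 0, so WPAR_ROOT stays empty and the unmounts run in the global environment. A sketch of that decision, reusing the clodmget query from the trace:

    # Sketch of the WPAR lookup traced above: an empty ODM answer means
    # "no WPAR for this resource group, operate in the global environment".
    wparName=$(clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource)
    if [[ -z $wparName ]]
    then
        exit 0              # caller sees empty output => WPAR_ROOT=''
    fi
    # otherwise the script would resolve and print the WPAR root path here
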
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:316] WPAR_ROOT=''
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:316] typeset WPAR_ROOT
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:317] STATUS=0
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:317] typeset -li STATUS
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:318] typeset lv
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:319] typeset fs_type
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:320] typeset count
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:321] typeset line
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:322] RC=0
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:322] typeset -li RC
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:323] typeset pid
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:324] typeset pidlist
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:325] typeset lv_lsfs
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:326] disable_procfile_debug=false
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:326] typeset disable_procfile_debug
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:327] typeset crossmount_rg
+epprd_rg:cl_deactivate_fs(2.807)[fs_umount:330] : Fetch filesystem type and unmount nfs filesystem
+epprd_rg:cl_deactivate_fs(2.809)[fs_umount:332] awk '$3==FILESYS && $4~"^nfs."{print $4}' FILESYS=/oracle/EPP/mirrlogA
+epprd_rg:cl_deactivate_fs(2.809)[fs_umount:332] mount
+epprd_rg:cl_deactivate_fs(2.813)[fs_umount:332] fs_type=''
+epprd_rg:cl_deactivate_fs(2.813)[fs_umount:333] [[ '' == nfs* ]]
+epprd_rg:cl_deactivate_fs(2.813)[fs_umount:365] : Get the logical volume associated with the filesystem
+epprd_rg:cl_deactivate_fs(2.813)[fs_umount:367] lsfs -c /oracle/EPP/mirrlogA
+epprd_rg:cl_deactivate_fs(2.816)[fs_umount:367] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/mirrlogA:/dev/mirrlogAlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_deactivate_fs(2.816)[fs_umount:382] : Get the logical volume name and filesystem type
+epprd_rg:cl_deactivate_fs(2.817)[fs_umount:384] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/mirrlogA:/dev/mirrlogAlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_deactivate_fs(2.818)[fs_umount:384] tail -1
+epprd_rg:cl_deactivate_fs(2.819)[fs_umount:384] read skip lv fs_type rest
+epprd_rg:cl_deactivate_fs(2.819)[fs_umount:384] IFS=:
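
The device behind the mount point comes from lsfs -c, whose colon-delimited output carries a header line: tail -1 drops the header and an IFS=: read splits the record, leaving the device in $lv and the vfs type in $fs_type. The same parse as a standalone one-liner (ksh runs the final read of a pipeline in the current shell, which is why this idiom works):

    # Sketch of the lsfs parse traced above.
    # Format: MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct
    lsfs -c /oracle/EPP/mirrlogA | tail -1 | IFS=: read skip lv fs_type rest
    print "device=$lv type=$fs_type"   # device=/dev/mirrlogAlv type=jfs2
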
+epprd_rg:cl_deactivate_fs(2.820)[fs_umount:387] : For WPARs, find the real file system name
+epprd_rg:cl_deactivate_fs(2.820)[fs_umount:389] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs(2.820)[fs_umount:392] : Check to see if filesystem is mounted.
+epprd_rg:cl_deactivate_fs(2.822)[fs_umount:394] awk '{ if ( $1 == "/dev/mirrlogAlv" ) print $2 }'
+epprd_rg:cl_deactivate_fs(2.822)[fs_umount:394] mount
+epprd_rg:cl_deactivate_fs(2.822)[fs_umount:394] LC_ALL=C
+epprd_rg:cl_deactivate_fs(2.826)[fs_umount:394] FS_MOUNTED=/oracle/EPP/mirrlogA
+epprd_rg:cl_deactivate_fs(2.826)[fs_umount:395] [[ -n /oracle/EPP/mirrlogA ]]
+epprd_rg:cl_deactivate_fs(2.826)[fs_umount:397] [[ /oracle/EPP/mirrlogA != /oracle/EPP/mirrlogA ]]
+epprd_rg:cl_deactivate_fs(2.826)[fs_umount:409] [[ /oracle/EPP/mirrlogA == / ]]
+epprd_rg:cl_deactivate_fs(2.826)[fs_umount:409] [[ /oracle/EPP/mirrlogA == /usr ]]
+epprd_rg:cl_deactivate_fs(2.826)[fs_umount:409] [[ /oracle/EPP/mirrlogA == /dev ]]
+epprd_rg:cl_deactivate_fs(2.826)[fs_umount:409] [[ /oracle/EPP/mirrlogA == /proc ]]
+epprd_rg:cl_deactivate_fs(2.826)[fs_umount:409] [[ /oracle/EPP/mirrlogA == /var ]]
+epprd_rg:cl_deactivate_fs(2.826)[fs_umount:425] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/mirrlogA'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:14.449939
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:14.449939|INFO: Deactivating Filesystem|/oracle/EPP/mirrlogA'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(2.855)[fs_umount:427] : Try up to 60 times to unmount /oracle/EPP/mirrlogA
+epprd_rg:cl_deactivate_fs(2.855)[fs_umount:429] (( count=1))
+epprd_rg:cl_deactivate_fs(2.855)[fs_umount:429] (( count <= 60))
+epprd_rg:cl_deactivate_fs(2.855)[fs_umount:432] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_deactivate_fs(2.858)[fs_umount:432] : Attempt 1 of 60 to unmount at Jan 28 18:00:14.000
+epprd_rg:cl_deactivate_fs(2.858)[fs_umount:434] umount /oracle/EPP/mirrlogA
+epprd_rg:cl_deactivate_fs(2.928)[fs_umount:437] : Unmount of /oracle/EPP/mirrlogA worked. Can stop now.
+epprd_rg:cl_deactivate_fs(2.928)[fs_umount:439] break
+epprd_rg:cl_deactivate_fs(2.928)[fs_umount:672] [[ 0 != 0 ]]
+epprd_rg:cl_deactivate_fs(2.928)[fs_umount:676] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP/mirrlogA'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:14.551858
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:14.551858|INFO: Deactivating Filesystem|/oracle/EPP/mirrlogA'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(2.957)[fs_umount:685] : append status to the status file
+epprd_rg:cl_deactivate_fs(2.957)[fs_umount:687] print -- 0 /dev/mirrlogAlv /oracle/EPP/mirrlogA
+epprd_rg:cl_deactivate_fs(2.957)[fs_umount:687] 1>> /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.957)[fs_umount:691] return 0
+epprd_rg:cl_deactivate_fs:/oracle/EPP[deactivate_fs_process_resources:764] PS4_LOOP=/oracle/EPP
+epprd_rg:cl_deactivate_fs:/oracle/EPP[deactivate_fs_process_resources:765] [[ sequential == parallel ]]
+epprd_rg:cl_deactivate_fs:/oracle/EPP[deactivate_fs_process_resources:770] fs_umount /oracle/EPP cl_deactivate_fs epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.957)[fs_umount:313] FS=/oracle/EPP
+epprd_rg:cl_deactivate_fs(2.957)[fs_umount:313] typeset FS
+epprd_rg:cl_deactivate_fs(2.957)[fs_umount:314] PROGNAME=cl_deactivate_fs
+epprd_rg:cl_deactivate_fs(2.957)[fs_umount:314] typeset PROGNAME
+epprd_rg:cl_deactivate_fs(2.957)[fs_umount:315] TMP_FILENAME=epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(2.957)[fs_umount:315] typeset TMP_FILENAME
+epprd_rg:cl_deactivate_fs(2.957)[fs_umount:316] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:316] WPAR_ROOT=''
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:316] typeset WPAR_ROOT
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:317] STATUS=0
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:317] typeset -li STATUS
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:318] typeset lv
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:319] typeset fs_type
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:320] typeset count
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:321] typeset line
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:322] RC=0
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:322] typeset -li RC
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:323] typeset pid
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:324] typeset pidlist
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:325] typeset lv_lsfs
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:326] disable_procfile_debug=false
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:326] typeset disable_procfile_debug
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:327] typeset crossmount_rg
+epprd_rg:cl_deactivate_fs(2.977)[fs_umount:330] : Fetch filesystem type and unmount nfs filesystem
+epprd_rg:cl_deactivate_fs(2.979)[fs_umount:332] awk '$3==FILESYS && $4~"^nfs."{print $4}' FILESYS=/oracle/EPP
+epprd_rg:cl_deactivate_fs(2.979)[fs_umount:332] mount
+epprd_rg:cl_deactivate_fs(2.983)[fs_umount:332] fs_type=''
+epprd_rg:cl_deactivate_fs(2.983)[fs_umount:333] [[ '' == nfs* ]]
+epprd_rg:cl_deactivate_fs(2.983)[fs_umount:365] : Get the logical volume associated with the filesystem
+epprd_rg:cl_deactivate_fs(2.984)[fs_umount:367] lsfs -c /oracle/EPP
+epprd_rg:cl_deactivate_fs(2.987)[fs_umount:367] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP:/dev/epplv:jfs2:::62914560:rw:no:no'
+epprd_rg:cl_deactivate_fs(2.987)[fs_umount:382] : Get the logical volume name and filesystem type
+epprd_rg:cl_deactivate_fs(2.988)[fs_umount:384] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP:/dev/epplv:jfs2:::62914560:rw:no:no'
+epprd_rg:cl_deactivate_fs(2.989)[fs_umount:384] tail -1
+epprd_rg:cl_deactivate_fs(2.989)[fs_umount:384] read skip lv fs_type rest
+epprd_rg:cl_deactivate_fs(2.989)[fs_umount:384] IFS=:
+epprd_rg:cl_deactivate_fs(2.991)[fs_umount:387] : For WPARs, find the real file system name
+epprd_rg:cl_deactivate_fs(2.991)[fs_umount:389] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs(2.991)[fs_umount:392] : Check to see if filesystem is mounted.
+epprd_rg:cl_deactivate_fs(2.992)[fs_umount:394] awk '{ if ( $1 == "/dev/epplv" ) print $2 }'
+epprd_rg:cl_deactivate_fs(2.992)[fs_umount:394] mount
+epprd_rg:cl_deactivate_fs(2.993)[fs_umount:394] LC_ALL=C
+epprd_rg:cl_deactivate_fs(2.997)[fs_umount:394] FS_MOUNTED=/oracle/EPP
+epprd_rg:cl_deactivate_fs(2.997)[fs_umount:395] [[ -n /oracle/EPP ]]
+epprd_rg:cl_deactivate_fs(2.997)[fs_umount:397] [[ /oracle/EPP != /oracle/EPP ]]
+epprd_rg:cl_deactivate_fs(2.997)[fs_umount:409] [[ /oracle/EPP == / ]]
+epprd_rg:cl_deactivate_fs(2.997)[fs_umount:409] [[ /oracle/EPP == /usr ]]
+epprd_rg:cl_deactivate_fs(2.997)[fs_umount:409] [[ /oracle/EPP == /dev ]]
+epprd_rg:cl_deactivate_fs(2.997)[fs_umount:409] [[ /oracle/EPP == /proc ]]
+epprd_rg:cl_deactivate_fs(2.997)[fs_umount:409] [[ /oracle/EPP == /var ]]
+epprd_rg:cl_deactivate_fs(2.997)[fs_umount:425] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:14.620414
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:14.620414|INFO: Deactivating Filesystem|/oracle/EPP'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(3.025)[fs_umount:427] : Try up to 60 times to unmount /oracle/EPP
+epprd_rg:cl_deactivate_fs(3.025)[fs_umount:429] (( count=1))
+epprd_rg:cl_deactivate_fs(3.025)[fs_umount:429] (( count <= 60))
+epprd_rg:cl_deactivate_fs(3.025)[fs_umount:432] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_deactivate_fs(3.028)[fs_umount:432] : Attempt 1 of 60 to unmount at Jan 28 18:00:14.000
+epprd_rg:cl_deactivate_fs(3.028)[fs_umount:434] umount /oracle/EPP
+epprd_rg:cl_deactivate_fs(3.435)[fs_umount:437] : Unmount of /oracle/EPP worked. Can stop now.
+epprd_rg:cl_deactivate_fs(3.435)[fs_umount:439] break
+epprd_rg:cl_deactivate_fs(3.435)[fs_umount:672] [[ 0 != 0 ]]
+epprd_rg:cl_deactivate_fs(3.435)[fs_umount:676] amlog_trace '' 'Deactivating Filesystem|/oracle/EPP'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:15.059397
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:15.059397|INFO: Deactivating Filesystem|/oracle/EPP'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(3.464)[fs_umount:685] : append status to the status file
+epprd_rg:cl_deactivate_fs(3.464)[fs_umount:687] print -- 0 /dev/epplv /oracle/EPP
+epprd_rg:cl_deactivate_fs(3.464)[fs_umount:687] 1>> /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(3.464)[fs_umount:691] return 0
+epprd_rg:cl_deactivate_fs:/oracle[deactivate_fs_process_resources:764] PS4_LOOP=/oracle
+epprd_rg:cl_deactivate_fs:/oracle[deactivate_fs_process_resources:765] [[ sequential == parallel ]]
+epprd_rg:cl_deactivate_fs:/oracle[deactivate_fs_process_resources:770] fs_umount /oracle cl_deactivate_fs epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(3.464)[fs_umount:313] FS=/oracle
+epprd_rg:cl_deactivate_fs(3.464)[fs_umount:313] typeset FS
+epprd_rg:cl_deactivate_fs(3.464)[fs_umount:314] PROGNAME=cl_deactivate_fs
+epprd_rg:cl_deactivate_fs(3.464)[fs_umount:314] typeset PROGNAME
+epprd_rg:cl_deactivate_fs(3.464)[fs_umount:315] TMP_FILENAME=epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(3.464)[fs_umount:315] typeset TMP_FILENAME
+epprd_rg:cl_deactivate_fs(3.464)[fs_umount:316] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:316] WPAR_ROOT=''
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:316] typeset WPAR_ROOT
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:317] STATUS=0
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:317] typeset -li STATUS
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:318] typeset lv
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:319] typeset fs_type
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:320] typeset count
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:321] typeset line
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:322] RC=0
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:322] typeset -li RC
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:323] typeset pid
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:324] typeset pidlist
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:325] typeset lv_lsfs
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:326] disable_procfile_debug=false
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:326] typeset disable_procfile_debug
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:327] typeset crossmount_rg
+epprd_rg:cl_deactivate_fs(3.485)[fs_umount:330] : Fetch filesystem type and unmount nfs filesystem
+epprd_rg:cl_deactivate_fs(3.487)[fs_umount:332] awk '$3==FILESYS && $4~"^nfs."{print $4}' FILESYS=/oracle
+epprd_rg:cl_deactivate_fs(3.487)[fs_umount:332] mount
+epprd_rg:cl_deactivate_fs(3.491)[fs_umount:332] fs_type=''
+epprd_rg:cl_deactivate_fs(3.491)[fs_umount:333] [[ '' == nfs* ]]
+epprd_rg:cl_deactivate_fs(3.491)[fs_umount:365] : Get the logical volume associated with the filesystem
+epprd_rg:cl_deactivate_fs(3.491)[fs_umount:367] lsfs -c /oracle
+epprd_rg:cl_deactivate_fs(3.494)[fs_umount:367] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle:/dev/oraclelv:jfs2:::41943040:rw:no:no'
+epprd_rg:cl_deactivate_fs(3.494)[fs_umount:382] : Get the logical volume name and filesystem type
+epprd_rg:cl_deactivate_fs(3.495)[fs_umount:384] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle:/dev/oraclelv:jfs2:::41943040:rw:no:no'
+epprd_rg:cl_deactivate_fs(3.496)[fs_umount:384] tail -1
+epprd_rg:cl_deactivate_fs(3.497)[fs_umount:384] read skip lv fs_type rest
+epprd_rg:cl_deactivate_fs(3.497)[fs_umount:384] IFS=:
+epprd_rg:cl_deactivate_fs(3.498)[fs_umount:387] : For WPARs, find the real file system name
+epprd_rg:cl_deactivate_fs(3.498)[fs_umount:389] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs(3.498)[fs_umount:392] : Check to see if filesystem is mounted.
+epprd_rg:cl_deactivate_fs(3.500)[fs_umount:394] awk '{ if ( $1 == "/dev/oraclelv" ) print $2 }'
+epprd_rg:cl_deactivate_fs(3.500)[fs_umount:394] mount
+epprd_rg:cl_deactivate_fs(3.500)[fs_umount:394] LC_ALL=C
+epprd_rg:cl_deactivate_fs(3.504)[fs_umount:394] FS_MOUNTED=/oracle
+epprd_rg:cl_deactivate_fs(3.504)[fs_umount:395] [[ -n /oracle ]]
+epprd_rg:cl_deactivate_fs(3.504)[fs_umount:397] [[ /oracle != /oracle ]]
+epprd_rg:cl_deactivate_fs(3.504)[fs_umount:409] [[ /oracle == / ]]
+epprd_rg:cl_deactivate_fs(3.504)[fs_umount:409] [[ /oracle == /usr ]]
+epprd_rg:cl_deactivate_fs(3.504)[fs_umount:409] [[ /oracle == /dev ]]
+epprd_rg:cl_deactivate_fs(3.504)[fs_umount:409] [[ /oracle == /proc ]]
+epprd_rg:cl_deactivate_fs(3.504)[fs_umount:409] [[ /oracle == /var ]]
+epprd_rg:cl_deactivate_fs(3.504)[fs_umount:425] amlog_trace '' 'Deactivating Filesystem|/oracle'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:15.127817
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:15.127817|INFO: Deactivating Filesystem|/oracle'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(3.533)[fs_umount:427] : Try up to 60 times to unmount /oracle
+epprd_rg:cl_deactivate_fs(3.533)[fs_umount:429] (( count=1))
+epprd_rg:cl_deactivate_fs(3.533)[fs_umount:429] (( count <= 60))
+epprd_rg:cl_deactivate_fs(3.533)[fs_umount:432] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_deactivate_fs(3.535)[fs_umount:432] : Attempt 1 of 60 to unmount at Jan 28 18:00:15.000
+epprd_rg:cl_deactivate_fs(3.536)[fs_umount:434] umount /oracle
+epprd_rg:cl_deactivate_fs(3.609)[fs_umount:437] : Unmount of /oracle worked. Can stop now.
+epprd_rg:cl_deactivate_fs(3.609)[fs_umount:439] break
+epprd_rg:cl_deactivate_fs(3.609)[fs_umount:672] [[ 0 != 0 ]]
+epprd_rg:cl_deactivate_fs(3.609)[fs_umount:676] amlog_trace '' 'Deactivating Filesystem|/oracle'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:15.233560
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:15.233560|INFO: Deactivating Filesystem|/oracle'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(3.638)[fs_umount:685] : append status to the status file
+epprd_rg:cl_deactivate_fs(3.638)[fs_umount:687] print -- 0 /dev/oraclelv /oracle
+epprd_rg:cl_deactivate_fs(3.638)[fs_umount:687] 1>> /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(3.638)[fs_umount:691] return 0
+epprd_rg:cl_deactivate_fs:/board_org[deactivate_fs_process_resources:764] PS4_LOOP=/board_org
+epprd_rg:cl_deactivate_fs:/board_org[deactivate_fs_process_resources:765] [[ sequential == parallel ]]
+epprd_rg:cl_deactivate_fs:/board_org[deactivate_fs_process_resources:770] fs_umount /board_org cl_deactivate_fs epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(3.639)[fs_umount:313] FS=/board_org
+epprd_rg:cl_deactivate_fs(3.639)[fs_umount:313] typeset FS
+epprd_rg:cl_deactivate_fs(3.639)[fs_umount:314] PROGNAME=cl_deactivate_fs
+epprd_rg:cl_deactivate_fs(3.639)[fs_umount:314] typeset PROGNAME
+epprd_rg:cl_deactivate_fs(3.639)[fs_umount:315] TMP_FILENAME=epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(3.639)[fs_umount:315] typeset TMP_FILENAME
+epprd_rg:cl_deactivate_fs(3.639)[fs_umount:316] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:316] WPAR_ROOT=''
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:316] typeset WPAR_ROOT
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:317] STATUS=0
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:317] typeset -li STATUS
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:318] typeset lv
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:319] typeset fs_type
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:320] typeset count
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:321] typeset line
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:322] RC=0
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:322] typeset -li RC
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:323] typeset pid
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:324] typeset pidlist
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:325] typeset lv_lsfs
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:326] disable_procfile_debug=false
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:326] typeset disable_procfile_debug
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:327] typeset crossmount_rg
+epprd_rg:cl_deactivate_fs(3.659)[fs_umount:330] : Fetch filesystem type and unmount nfs filesystem
+epprd_rg:cl_deactivate_fs(3.661)[fs_umount:332] awk '$3==FILESYS && $4~"^nfs."{print $4}' FILESYS=/board_org
+epprd_rg:cl_deactivate_fs(3.661)[fs_umount:332] mount
+epprd_rg:cl_deactivate_fs(3.665)[fs_umount:332] fs_type=''
+epprd_rg:cl_deactivate_fs(3.665)[fs_umount:333] [[ '' == nfs* ]]
+epprd_rg:cl_deactivate_fs(3.665)[fs_umount:365] : Get the logical volume associated with the filesystem
+epprd_rg:cl_deactivate_fs(3.665)[fs_umount:367] lsfs -c /board_org
+epprd_rg:cl_deactivate_fs(3.668)[fs_umount:367] lv_lsfs=$'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/board_org:/dev/boardlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_deactivate_fs(3.669)[fs_umount:382] : Get the logical volume name and filesystem type
+epprd_rg:cl_deactivate_fs(3.670)[fs_umount:384] print $'#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/board_org:/dev/boardlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_deactivate_fs(3.671)[fs_umount:384] tail -1
+epprd_rg:cl_deactivate_fs(3.671)[fs_umount:384] read skip lv fs_type rest
+epprd_rg:cl_deactivate_fs(3.671)[fs_umount:384] IFS=:
+epprd_rg:cl_deactivate_fs(3.672)[fs_umount:387] : For WPARs, find the real file system name
+epprd_rg:cl_deactivate_fs(3.673)[fs_umount:389] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs(3.673)[fs_umount:392] : Check to see if filesystem is mounted.
+epprd_rg:cl_deactivate_fs(3.674)[fs_umount:394] awk '{ if ( $1 == "/dev/boardlv" ) print $2 }'
+epprd_rg:cl_deactivate_fs(3.674)[fs_umount:394] mount
+epprd_rg:cl_deactivate_fs(3.674)[fs_umount:394] LC_ALL=C
+epprd_rg:cl_deactivate_fs(3.679)[fs_umount:394] FS_MOUNTED=/board_org
+epprd_rg:cl_deactivate_fs(3.679)[fs_umount:395] [[ -n /board_org ]]
+epprd_rg:cl_deactivate_fs(3.679)[fs_umount:397] [[ /board_org != /board_org ]]
+epprd_rg:cl_deactivate_fs(3.679)[fs_umount:409] [[ /board_org == / ]]
+epprd_rg:cl_deactivate_fs(3.679)[fs_umount:409] [[ /board_org == /usr ]]
+epprd_rg:cl_deactivate_fs(3.679)[fs_umount:409] [[ /board_org == /dev ]]
+epprd_rg:cl_deactivate_fs(3.679)[fs_umount:409] [[ /board_org == /proc ]]
+epprd_rg:cl_deactivate_fs(3.679)[fs_umount:409] [[ /board_org == /var ]]
+epprd_rg:cl_deactivate_fs(3.679)[fs_umount:425] amlog_trace '' 'Deactivating Filesystem|/board_org'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:15.302315
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:15.302315|INFO: Deactivating Filesystem|/board_org'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(3.707)[fs_umount:427] : Try up to 60 times to unmount /board_org
+epprd_rg:cl_deactivate_fs(3.707)[fs_umount:429] (( count=1))
+epprd_rg:cl_deactivate_fs(3.707)[fs_umount:429] (( count <= 60))
+epprd_rg:cl_deactivate_fs(3.707)[fs_umount:432] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_deactivate_fs(3.710)[fs_umount:432] : Attempt 1 of 60 to unmount at Jan 28 18:00:15.000
+epprd_rg:cl_deactivate_fs(3.710)[fs_umount:434] umount /board_org
+epprd_rg:cl_deactivate_fs(3.780)[fs_umount:437] : Unmount of /board_org worked. Can stop now.
+epprd_rg:cl_deactivate_fs(3.780)[fs_umount:439] break
+epprd_rg:cl_deactivate_fs(3.780)[fs_umount:672] [[ 0 != 0 ]]
+epprd_rg:cl_deactivate_fs(3.780)[fs_umount:676] amlog_trace '' 'Deactivating Filesystem|/board_org'
+epprd_rg:cl_deactivate_fs[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_fs[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_fs[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_fs[amlog_trace:319] DATE=2023-01-28T18:00:15.403937
+epprd_rg:cl_deactivate_fs[amlog_trace:320] echo '|2023-01-28T18:00:15.403937|INFO: Deactivating Filesystem|/board_org'
+epprd_rg:cl_deactivate_fs[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_fs(3.809)[fs_umount:685] : append status to the status file
+epprd_rg:cl_deactivate_fs(3.809)[fs_umount:687] print -- 0 /dev/boardlv /board_org
+epprd_rg:cl_deactivate_fs(3.809)[fs_umount:687] 1>> /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs(3.809)[fs_umount:691] return 0
+epprd_rg:cl_deactivate_fs:/board_org[deactivate_fs_process_resources:773] unset PS4_LOOP
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:777] [[ -n '' ]]
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:786] ALLNOERROR=All_non_error_filesystems
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:788] : update resource manager
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:790] cl_RMupdate resource_down All_non_error_filesystems cl_deactivate_fs
2023-01-28T18:00:15.426997
2023-01-28T18:00:15.431462
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:794] : Check to see how the unmounts went
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:796] [[ -s /tmp/epprd_rg_deactivate_fs.tmp ]]
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:798] grep -qw ^1 /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:805] grep -qw ^11 /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:814] : All unmounts successful
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:816] STATUS=0
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:817] rm -f /tmp/epprd_rg_deactivate_fs.tmp
+epprd_rg:cl_deactivate_fs[deactivate_fs_process_resources:821] return 0
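
Every fs_umount call appends "<rc> <lv> <mountpoint>" to /tmp/epprd_rg_deactivate_fs.tmp; once the loop over mount points finishes, the caller scans that file for failure codes before declaring the release clean. A sketch of that protocol (treating code 11 as a distinct failure class is inferred from the two separate greps):

    # Sketch of the status-file scan traced above.
    statfile=/tmp/epprd_rg_deactivate_fs.tmp
    if [[ -s $statfile ]]
    then
        if grep -qw '^1' "$statfile"       # a plain unmount failure
        then
            STATUS=1
        elif grep -qw '^11' "$statfile"    # separate failure class (assumed)
        then
            STATUS=11
        else
            STATUS=0                       # all unmounts successful
            rm -f "$statfile"
        fi
    fi
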
+epprd_rg:cl_deactivate_fs[924] exit 0
+epprd_rg:process_resources[process_file_systems:2668] RC=0
+epprd_rg:process_resources[process_file_systems:2669] (( 0 != 0 && 0 != 11 ))
+epprd_rg:process_resources[process_file_systems:2681] (( 0 != 0 ))
+epprd_rg:process_resources[process_file_systems:2687] return 0
+epprd_rg:process_resources[3483] RC=0
+epprd_rg:process_resources[3485] [[ RELEASE == RELEASE ]]
+epprd_rg:process_resources[3487] (( 0 != 0 ))
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T18:00:15.453718 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=VGS ACTION=RELEASE VOLUME_GROUPS='"datavg"' RESOURCE_GROUPS='"epprd_rg' '"' EXPORT_FILESYSTEM='"TRUE"'
+epprd_rg:process_resources[1] JOB_TYPE=VGS
+epprd_rg:process_resources[1] ACTION=RELEASE
+epprd_rg:process_resources[1] VOLUME_GROUPS=datavg
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[1] EXPORT_FILESYSTEM=TRUE
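
The planner handoff above is the engine of this whole event: clRGPA prints shell assignments, and process_resources evals them inside set -a so everything the eval defines is exported to the child scripts it spawns next (here cl_deactivate_vgs picks up JOB_TYPE, VOLUME_GROUPS, and the rest from the environment). The pattern in miniature:

    # Sketch of the clRGPA handoff traced above: 'set -a' auto-exports every
    # variable the eval'd assignments create, so child event scripts inherit them.
    set -a
    eval $(clRGPA)    # e.g. JOB_TYPE=VGS ACTION=RELEASE VOLUME_GROUPS="datavg"
    set +a
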
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ VGS == RELEASE ]]
+epprd_rg:process_resources[3360] [[ VGS == ONLINE ]]
+epprd_rg:process_resources[3571] process_volume_groups_main RELEASE
+epprd_rg:process_resources[process_volume_groups_main:2293] PS4_FUNC=process_volume_groups_main
+epprd_rg:process_resources[process_volume_groups_main:2293] typeset PS4_FUNC
+epprd_rg:process_resources[process_volume_groups_main:2294] [[ high == high ]]
+epprd_rg:process_resources[process_volume_groups_main:2294] set -x
+epprd_rg:process_resources[process_volume_groups_main:2295] DEF_VARYON_ACTION=0
+epprd_rg:process_resources[process_volume_groups_main:2295] typeset -li DEF_VARYON_ACTION
+epprd_rg:process_resources[process_volume_groups_main:2296] FAILURE_IN_METHOD=0
+epprd_rg:process_resources[process_volume_groups_main:2296] typeset -li FAILURE_IN_METHOD
+epprd_rg:process_resources[process_volume_groups_main:2297] ACTION=RELEASE
+epprd_rg:process_resources[process_volume_groups_main:2297] typeset ACTION
+epprd_rg:process_resources[process_volume_groups_main:2298] STAT=0
+epprd_rg:process_resources[process_volume_groups_main:2299] VG_LIST=datavg
+epprd_rg:process_resources[process_volume_groups_main:2300] RG_LIST=epprd_rg
+epprd_rg:process_resources[process_volume_groups_main:2304] getReplicatedResources epprd_rg
+epprd_rg:process_resources[getReplicatedResources:699] PS4_FUNC=getReplicatedResources
+epprd_rg:process_resources[getReplicatedResources:699] typeset PS4_FUNC
+epprd_rg:process_resources[getReplicatedResources:700] [[ high == high ]]
+epprd_rg:process_resources[getReplicatedResources:700] set -x
+epprd_rg:process_resources[getReplicatedResources:702] RV=false
+epprd_rg:process_resources[getReplicatedResources:704] clodmget -n -f type HACMPrresmethods
+epprd_rg:process_resources[getReplicatedResources:704] [[ -n 9 ]]
+epprd_rg:process_resources[getReplicatedResources:707] : Replicated resource methods are defined, check for resources
+epprd_rg:process_resources[getReplicatedResources:709] clodmget -q $'name like \'*_REP_RESOURCE\' AND group=epprd_rg' -f value -n HACMPresource
+epprd_rg:process_resources[getReplicatedResources:709] [[ -n '' ]]
+epprd_rg:process_resources[getReplicatedResources:718] : Verify if any backup profiles are configured and trigger cbm utilities based on that
+epprd_rg:process_resources[getReplicatedResources:720] clodmget -q name=BACKUP_ENABLED -f value HACMPresource
+epprd_rg:process_resources[getReplicatedResources:720] [[ -n '' ]]
+epprd_rg:process_resources[getReplicatedResources:739] echo false
+epprd_rg:process_resources[process_volume_groups_main:2304] REPLICATED_RESOURCES=false
+epprd_rg:process_resources[process_volume_groups_main:2305] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[process_volume_groups_main:2306] print -- datavg
+epprd_rg:process_resources[process_volume_groups_main:2306] read VOLUME_GROUPS VG_LIST
+epprd_rg:process_resources[process_volume_groups_main:2306] IFS=:
+epprd_rg:process_resources[process_volume_groups_main:2307] VOLUME_GROUPS=datavg
+epprd_rg:process_resources[process_volume_groups_main:2310] : At this point, these variables contain information only for epprd_rg
+epprd_rg:process_resources[process_volume_groups_main:2312] export VOLUME_GROUPS
+epprd_rg:process_resources[process_volume_groups_main:2313] export RESOURCE_GROUPS
+epprd_rg:process_resources[process_volume_groups_main:2315] [[ false == true ]]
+epprd_rg:process_resources[process_volume_groups_main:2555] process_volume_groups RELEASE
+epprd_rg:process_resources[process_volume_groups:2571] PS4_FUNC=process_volume_groups
+epprd_rg:process_resources[process_volume_groups:2571] typeset PS4_FUNC
+epprd_rg:process_resources[process_volume_groups:2572] [[ high == high ]]
+epprd_rg:process_resources[process_volume_groups:2572] set -x
+epprd_rg:process_resources[process_volume_groups:2573] STAT=0
+epprd_rg:process_resources[process_volume_groups:2575] GROUPNAME=epprd_rg
+epprd_rg:process_resources[process_volume_groups:2575] export GROUPNAME
+epprd_rg:process_resources[process_volume_groups:2578] [[ RELEASE == ACQUIRE ]]
+epprd_rg:process_resources[process_volume_groups:2603] [[ RELEASE == RELEASE ]]
+epprd_rg:process_resources[process_volume_groups:2605] cl_deactivate_vgs -n
+epprd_rg:cl_deactivate_vgs[458] version=%I%
+epprd_rg:cl_deactivate_vgs[461] STATUS=0
+epprd_rg:cl_deactivate_vgs[461] typeset -li STATUS
+epprd_rg:cl_deactivate_vgs[462] TMP_VARYOFF_STATUS=/tmp/_deactivate_vgs.tmp
+epprd_rg:cl_deactivate_vgs[463] sddsrv_off=FALSE
+epprd_rg:cl_deactivate_vgs[464] ALLVGS=All_volume_groups
+epprd_rg:cl_deactivate_vgs[465] OEM_CALL=false
+epprd_rg:cl_deactivate_vgs[467] (( 1 != 0 ))
+epprd_rg:cl_deactivate_vgs[467] [[ -n == -c ]]
+epprd_rg:cl_deactivate_vgs[476] EVENT_TYPE=RELEASE_PRIMARY
+epprd_rg:cl_deactivate_vgs[477] EVENT_TYPE=RELEASE_PRIMARY
+epprd_rg:cl_deactivate_vgs[480] : if JOB_TYPE is set and is not $'\'GROUP\',' then process_resources is parent
+epprd_rg:cl_deactivate_vgs[482] [[ VGS != 0 ]]
+epprd_rg:cl_deactivate_vgs[482] [[ VGS != GROUP ]]
+epprd_rg:cl_deactivate_vgs[485] : parameters passed from process_resources thru environment
+epprd_rg:cl_deactivate_vgs[487] PROC_RES=true
+epprd_rg:cl_deactivate_vgs[501] : set -u will report an error if any variable used in the script is not set
+epprd_rg:cl_deactivate_vgs[503] set -u
+epprd_rg:cl_deactivate_vgs[506] : Remove the status file if it currently exists
+epprd_rg:cl_deactivate_vgs[508] rm -f /tmp/_deactivate_vgs.tmp
+epprd_rg:cl_deactivate_vgs[511] : Each of the V, R, M and F fields are padded to fixed length,
+epprd_rg:cl_deactivate_vgs[512] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_deactivate_vgs[513] : 99.99.999.999
+epprd_rg:cl_deactivate_vgs[515] typeset -li V R M F
+epprd_rg:cl_deactivate_vgs[516] typeset -Z2 R
+epprd_rg:cl_deactivate_vgs[517] typeset -Z3 M
+epprd_rg:cl_deactivate_vgs[518] typeset -Z3 F
+epprd_rg:cl_deactivate_vgs[519] VRMF=0
+epprd_rg:cl_deactivate_vgs[519] typeset -li VRMF
+epprd_rg:cl_deactivate_vgs[528] ls '/dev/vpath*'
+epprd_rg:cl_deactivate_vgs[528] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_vgs[595] : Special processing for 2-node NFS clusters
+epprd_rg:cl_deactivate_vgs[597] TWO_NODE_CLUSTER=FALSE
+epprd_rg:cl_deactivate_vgs[597] export TWO_NODE_CLUSTER
+epprd_rg:cl_deactivate_vgs[598] FS_TYPES='jsf2?log'
+epprd_rg:cl_deactivate_vgs[598] export FS_TYPES
+epprd_rg:cl_deactivate_vgs[599] wc -l
+epprd_rg:cl_deactivate_vgs[599] clodmget -q 'object = VERBOSE_LOGGING' -f name -n HACMPnode
+epprd_rg:cl_deactivate_vgs[599] (( 2 == 2 ))
+epprd_rg:cl_deactivate_vgs[600] [[ -n TRUE ]]
+epprd_rg:cl_deactivate_vgs[602] : two nodes, with exported filesystems
+epprd_rg:cl_deactivate_vgs[603] TWO_NODE_CLUSTER=TRUE
+epprd_rg:cl_deactivate_vgs[603] export TWO_NODE_CLUSTER
+epprd_rg:cl_deactivate_vgs[607] : Pick up a list of currently varied on volume groups
+epprd_rg:cl_deactivate_vgs[609] lsvg -L -o
+epprd_rg:cl_deactivate_vgs[609] 2> /tmp/lsvg.err
+epprd_rg:cl_deactivate_vgs[609] VG_ON_LIST=$'datavg\ncaavg_private\nrootvg'
+epprd_rg:cl_deactivate_vgs[612] : if not called from process_resources, use old-style environment and parameters
+epprd_rg:cl_deactivate_vgs[614] [[ true == false ]]
+epprd_rg:cl_deactivate_vgs[672] : Called from process_resources
+epprd_rg:cl_deactivate_vgs[674] LIST_OF_VOLUME_GROUPS_FOR_RG=''
+epprd_rg:cl_deactivate_vgs[679] export GROUPNAME
+epprd_rg:cl_deactivate_vgs[681] : Discover the volume groups for this resource group.
+epprd_rg:cl_deactivate_vgs[686] echo datavg
+epprd_rg:cl_deactivate_vgs[686] read LIST_OF_VOLUME_GROUPS_FOR_RG VOLUME_GROUPS
+epprd_rg:cl_deactivate_vgs[686] IFS=:
+epprd_rg:cl_deactivate_vgs[689] : Reverse the order, so that VGs release in reverse order of acquisition
+epprd_rg:cl_deactivate_vgs[693] sed 's/ /,/g'
+epprd_rg:cl_deactivate_vgs[693] echo datavg
+epprd_rg:cl_deactivate_vgs[693] LIST_OF_COMMASEP_VG_FOR_RG=datavg
+epprd_rg:cl_deactivate_vgs[694] echo datavg
+epprd_rg:cl_deactivate_vgs[695] tr , '\n'
+epprd_rg:cl_deactivate_vgs[695] egrep -v -w $'rootvg|caavg_private\n |altinst_rootvg|old_rootvg'
+epprd_rg:cl_deactivate_vgs[696] sort -ru
+epprd_rg:cl_deactivate_vgs[694] LIST_OF_VOLUME_GROUPS_FOR_RG=datavg
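
Before release, the VG list for the group is normalized: joined with commas, split back one name per line, stripped of system groups that an event must never vary off, and reverse-sorted so groups release in the opposite order to acquisition. The same pipeline standalone:

    # Sketch of the release-ordering pipeline traced above. sort -ru gives
    # reverse lexical order, which stands in for reverse acquisition order.
    echo "$LIST_OF_COMMASEP_VG_FOR_RG" |
        tr ',' '\n' |
        egrep -v -w 'rootvg|caavg_private|altinst_rootvg|old_rootvg' |
        sort -ru
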
+epprd_rg:cl_deactivate_vgs[698] : Update Resource Manager - releasing VGs for this RG
+epprd_rg:cl_deactivate_vgs[700] cl_RMupdate resource_releasing All_volume_groups cl_deactivate_vgs
2023-01-28T18:00:15.542294
2023-01-28T18:00:15.546779
+epprd_rg:cl_deactivate_vgs[703] : Process the volume groups for this resource group
+epprd_rg:cl_deactivate_vgs:datavg[707] PS4_LOOP=datavg
+epprd_rg:cl_deactivate_vgs:datavg[711] print datavg caavg_private rootvg
+epprd_rg:cl_deactivate_vgs:datavg[711] grep -qw datavg
+epprd_rg:cl_deactivate_vgs:datavg[719] : The VG is varied on, so go vary it off. Get the VG mode first
+epprd_rg:cl_deactivate_vgs:datavg[721] MODE=9999
+epprd_rg:cl_deactivate_vgs:datavg[722] /usr/sbin/getlvodm -v datavg
+epprd_rg:cl_deactivate_vgs:datavg[722] VGID=00c44af100004b00000001851e9dc053
+epprd_rg:cl_deactivate_vgs:datavg[723] lqueryvg -g 00c44af100004b00000001851e9dc053 -X
+epprd_rg:cl_deactivate_vgs:datavg[723] MODE=32
+epprd_rg:cl_deactivate_vgs:datavg[724] RC=0
+epprd_rg:cl_deactivate_vgs:datavg[725] (( 0 != 0 ))
+epprd_rg:cl_deactivate_vgs:datavg[726] : exit status of lqueryvg -g 00c44af100004b00000001851e9dc053 -X: 0
+epprd_rg:cl_deactivate_vgs:datavg[728] vgs_varyoff datavg 32
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:60] PS4_TIMER=true
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:60] typeset PS4_TIMER
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:61] [[ high == high ]]
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:61] set -x
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:63] VG=datavg
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:63] typeset VG
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:64] MODE=32
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:64] typeset MODE
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:66] OPEN_FSs=''
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:66] typeset OPEN_FSs
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:67] OPEN_LVs=''
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:67] typeset OPEN_LVs
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:68] typeset TMP_VG_LIST
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:69] TS_FLAGS=''
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:69] typeset TS_FLAGS
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:71] STATUS=0
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:71] typeset -li STATUS
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:72] RC=0
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:72] typeset -li RC
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:73] SELECTIVE_FAILOVER=false
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:73] typeset SELECTIVE_FAILOVER
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:74] typeset LV
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:75] lv_list=''
+epprd_rg:cl_deactivate_vgs(0.093):datavg[vgs_varyoff:75] typeset lv_list
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:76] typeset FS
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:77] FS_MOUNTED=''
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:77] typeset FS_MOUNTED
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:79] rc_fuser=0
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:79] typeset -li rc_fuser
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:80] rc_varyonvg=0
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:80] typeset -li rc_varyonvg
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:81] rc_varyoffvg=0
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:81] typeset -li rc_varyoffvg
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:82] rc_lsvg=0
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:82] typeset -li rc_lsvg
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:83] rc_dfs=0
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:83] typeset -li rc_dfs
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:84] rc_dvg=0
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:84] typeset -li rc_dvg
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:88] typeset -li FV FR FM FF
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:89] typeset -Z2 FR
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:90] typeset -Z3 FM
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:91] typeset -Z3 FF
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:92] FVRMF=0
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:92] typeset -li FVRMF
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:93] fuser_lvl=601004000
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:93] typeset -li fuser_lvl
+epprd_rg:cl_deactivate_vgs(0.094):datavg[vgs_varyoff:95] lsvg -l -L datavg
+epprd_rg:cl_deactivate_vgs(0.095):datavg[vgs_varyoff:95] 2> /dev/null
+epprd_rg:cl_deactivate_vgs(0.115):datavg[vgs_varyoff:95] TMP_VG_LIST=$'datavg:\nLV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT\nepprdaloglv jfs2log 1 1 1 closed/syncd N/A\nsaplv jfs2 100 100 7 closed/syncd /usr/sap\nsapmntlv jfs2 20 20 7 closed/syncd /sapmnt\noraclelv jfs2 40 40 7 closed/syncd /oracle\nepplv jfs2 60 60 7 closed/syncd /oracle/EPP\noraarchlv jfs2 100 100 7 closed/syncd /oracle/EPP/oraarch\nsapdata1lv jfs2 100 100 7 closed/syncd /oracle/EPP/sapdata1\nsapdata2lv jfs2 100 100 7 closed/syncd /oracle/EPP/sapdata2\nsapdata3lv jfs2 100 100 7 closed/syncd /oracle/EPP/sapdata3\nsapdata4lv jfs2 100 100 7 closed/syncd /oracle/EPP/sapdata4\nboardlv jfs2 10 10 7 closed/syncd /board_org\noriglogAlv jfs2 10 10 7 closed/syncd /oracle/EPP/origlogA\noriglogBlv jfs2 10 10 7 closed/syncd /oracle/EPP/origlogB\nmirrlogAlv jfs2 10 10 7 closed/syncd /oracle/EPP/mirrlogA\nmirrlogBlv jfs2 10 10 7 closed/syncd /oracle/EPP/mirrlogB'
+epprd_rg:cl_deactivate_vgs(0.115):datavg[vgs_varyoff:96] rc_lsvg=0
+epprd_rg:cl_deactivate_vgs(0.115):datavg[vgs_varyoff:98] [[ RELEASE_PRIMARY == reconfig* ]]
+epprd_rg:cl_deactivate_vgs(0.115):datavg[vgs_varyoff:114] [[ -n $'datavg:\nLV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT\nepprdaloglv jfs2log 1 1 1 closed/syncd N/A\nsaplv jfs2 100 100 7 closed/syncd /usr/sap\nsapmntlv jfs2 20 20 7 closed/syncd /sapmnt\noraclelv jfs2 40 40 7 closed/syncd /oracle\nepplv jfs2 60 60 7 closed/syncd /oracle/EPP\noraarchlv jfs2 100 100 7 closed/syncd /oracle/EPP/oraarch\nsapdata1lv jfs2 100 100 7 closed/syncd /oracle/EPP/sapdata1\nsapdata2lv jfs2 100 100 7 closed/syncd /oracle/EPP/sapdata2\nsapdata3lv jfs2 100 100 7 closed/syncd /oracle/EPP/sapdata3\nsapdata4lv jfs2 100 100 7 closed/syncd /oracle/EPP/sapdata4\nboardlv jfs2 10 10 7 closed/syncd /board_org\noriglogAlv jfs2 10 10 7 closed/syncd /oracle/EPP/origlogA\noriglogBlv jfs2 10 10 7 closed/syncd /oracle/EPP/origlogB\nmirrlogAlv jfs2 10 10 7 closed/syncd /oracle/EPP/mirrlogA\nmirrlogBlv jfs2 10 10 7 closed/syncd /oracle/EPP/mirrlogB' ]]
+epprd_rg:cl_deactivate_vgs(0.115):datavg[vgs_varyoff:117] : Get list of open logical volumes corresponding to filesystems
+epprd_rg:cl_deactivate_vgs(0.116):datavg[vgs_varyoff:119] awk '$2 ~ /jfs2?$/ && $6 ~ /open/ {print $1}'
+epprd_rg:cl_deactivate_vgs(0.116):datavg[vgs_varyoff:119] print $'datavg:\nLV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT\nepprdaloglv jfs2log 1 1 1 closed/syncd N/A\nsaplv jfs2 100 100 7 closed/syncd /usr/sap\nsapmntlv jfs2 20 20 7 closed/syncd /sapmnt\noraclelv jfs2 40 40 7 closed/syncd /oracle\nepplv jfs2 60 60 7 closed/syncd /oracle/EPP\noraarchlv jfs2 100 100 7 closed/syncd /oracle/EPP/oraarch\nsapdata1lv jfs2 100 100 7 closed/syncd /oracle/EPP/sapdata1\nsapdata2lv jfs2 100 100 7 closed/syncd /oracle/EPP/sapdata2\nsapdata3lv jfs2 100 100 7 closed/syncd /oracle/EPP/sapdata3\nsapdata4lv jfs2 100 100 7 closed/syncd /oracle/EPP/sapdata4\nboardlv jfs2 10 10 7 closed/syncd /board_org\noriglogAlv jfs2 10 10 7 closed/syncd /oracle/EPP/origlogA\noriglogBlv jfs2 10 10 7 closed/syncd /oracle/EPP/origlogB\nmirrlogAlv jfs2 10 10 7 closed/syncd /oracle/EPP/mirrlogA\nmirrlogBlv jfs2 10 10 7 closed/syncd /oracle/EPP/mirrlogB'
+epprd_rg:cl_deactivate_vgs(0.120):datavg[vgs_varyoff:119] OPEN_LVs=''
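
vgs_varyoff first asks which jfs/jfs2 logical volumes are still open, since those mark filesystems that would block the varyoff; with every LV here in closed/syncd state, OPEN_LVs stays empty and no fuser recovery is needed. The scan standalone:

    # Sketch of the open-LV scan traced above: in 'lsvg -l' output, column 2
    # is the LV type and column 6 the state; open jfs/jfs2 LVs block varyoff.
    lsvg -l -L datavg 2>/dev/null |
        awk '$2 ~ /jfs2?$/ && $6 ~ /open/ {print $1}'
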
+epprd_rg:cl_deactivate_vgs(0.120):datavg[vgs_varyoff:122] [[ 0 != 0 ]]
+epprd_rg:cl_deactivate_vgs(0.120):datavg[vgs_varyoff:140] [[ -n '' ]]
+epprd_rg:cl_deactivate_vgs(0.120):datavg[vgs_varyoff:167] [[ TRUE == TRUE ]]
+epprd_rg:cl_deactivate_vgs(0.120):datavg[vgs_varyoff:170] : For two-node clusters, special processing for the highly available NFS
+epprd_rg:cl_deactivate_vgs(0.120):datavg[vgs_varyoff:171] : server function: tell NFS to dump the dup cache into the jfslog or jfs2log
+epprd_rg:cl_deactivate_vgs(0.120):datavg[vgs_varyoff:175] : Find the first log device in the saved list of logical volumes
+epprd_rg:cl_deactivate_vgs(0.120):datavg[vgs_varyoff:177] pattern='jsf2?log'
+epprd_rg:cl_deactivate_vgs(0.122):datavg[vgs_varyoff:178] awk '$2 ~ /jsf2?log/ {printf "/dev/%s\n", $1 ; exit}'
+epprd_rg:cl_deactivate_vgs(0.122):datavg[vgs_varyoff:178] print $'datavg:\nLV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT\nepprdaloglv jfs2log 1 1 1 closed/syncd N/A\nsaplv jfs2 100 100 7 closed/syncd /usr/sap\nsapmntlv jfs2 20 20 7 closed/syncd /sapmnt\noraclelv jfs2 40 40 7 closed/syncd /oracle\nepplv jfs2 60 60 7 closed/syncd /oracle/EPP\noraarchlv jfs2 100 100 7 closed/syncd /oracle/EPP/oraarch\nsapdata1lv jfs2 100 100 7 closed/syncd /oracle/EPP/sapdata1\nsapdata2lv jfs2 100 100 7 closed/syncd /oracle/EPP/sapdata2\nsapdata3lv jfs2 100 100 7 closed/syncd /oracle/EPP/sapdata3\nsapdata4lv jfs2 100 100 7 closed/syncd /oracle/EPP/sapdata4\nboardlv jfs2 10 10 7 closed/syncd /board_org\noriglogAlv jfs2 10 10 7 closed/syncd /oracle/EPP/origlogA\noriglogBlv jfs2 10 10 7 closed/syncd /oracle/EPP/origlogB\nmirrlogAlv jfs2 10 10 7 closed/syncd /oracle/EPP/mirrlogA\nmirrlogBlv jfs2 10 10 7 closed/syncd /oracle/EPP/mirrlogB'
+epprd_rg:cl_deactivate_vgs(0.126):datavg[vgs_varyoff:178] logdev=''
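
Note the empty result: the scan looks for a log device so NFS can dump its dup cache into it, but the pattern is spelled 'jsf2?log' (FS_TYPES above carries the same transposition), which can never match the jfs2log type of epprdaloglv, so logdev comes back empty and the dup-cache dump has no target. Assuming the intended pattern was 'jfs2?log', the same scan would find it:

    # Assumed correction of the transposed pattern ('jfs' rather than 'jsf'),
    # shown only to illustrate what the lookup is meant to return.
    print "$TMP_VG_LIST" |
        awk '$2 ~ /jfs2?log/ {printf "/dev/%s\n", $1; exit}'
    # -> /dev/epprdaloglv
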
+epprd_rg:cl_deactivate_vgs(0.126):datavg[vgs_varyoff:180] [[ -z '' ]]
+epprd_rg:cl_deactivate_vgs(0.126):datavg[vgs_varyoff:181] [[ true == true ]]
+epprd_rg:cl_deactivate_vgs(0.126):datavg[vgs_varyoff:182] [[ ONLINE != ONLINE ]]
+epprd_rg:cl_deactivate_vgs(0.126):datavg[vgs_varyoff:216] [[ -n '' ]]
+epprd_rg:cl_deactivate_vgs(0.126):datavg[vgs_varyoff:223] : Finally, vary off the volume group
+epprd_rg:cl_deactivate_vgs(0.126):datavg[vgs_varyoff:226] amlog_trace '' 'Deactivating Volume Group|datavg'
+epprd_rg:cl_deactivate_vgs(0.126):datavg[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_vgs(0.127):datavg[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_vgs(0.151):datavg[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_vgs(0.154):datavg[amlog_trace:319] DATE=2023-01-28T18:00:15.633975
+epprd_rg:cl_deactivate_vgs(0.154):datavg[amlog_trace:320] echo '|2023-01-28T18:00:15.633975|INFO: Deactivating Volume Group|datavg'
+epprd_rg:cl_deactivate_vgs(0.154):datavg[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_vgs(0.154):datavg[vgs_varyoff:228] [[ 32 == 32 ]]
+epprd_rg:cl_deactivate_vgs(0.154):datavg[vgs_varyoff:231] : This VG is ECM. Move to passive mode.
+epprd_rg:cl_deactivate_vgs(0.154):datavg[vgs_varyoff:244] TS_FLAGS=-o
+epprd_rg:cl_deactivate_vgs(0.154):datavg[vgs_varyoff:245] cltime
2023-01-28T18:00:15.636656
+epprd_rg:cl_deactivate_vgs(0.157):datavg[vgs_varyoff:246] varyonvg -c -n -P datavg
+epprd_rg:cl_deactivate_vgs(0.157):datavg[vgs_varyoff:246] 2> /dev/null
+epprd_rg:cl_deactivate_vgs(0.291):datavg[vgs_varyoff:247] rc_varyonvg=0
+epprd_rg:cl_deactivate_vgs(0.291):datavg[vgs_varyoff:248] : return code from varyonvg -c -n -P datavg is 0
+epprd_rg:cl_deactivate_vgs(0.291):datavg[vgs_varyoff:249] cltime
2023-01-28T18:00:15.774180
+epprd_rg:cl_deactivate_vgs(0.294):datavg[vgs_varyoff:250] (( 0 != 0 ))
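
Because lqueryvg reported mode 32, datavg is an enhanced concurrent mode (ECM) group, and "varyoff" here means dropping this node from active to passive mode: varyonvg -c -n -P keeps the VG known to LVM cluster-wide while this node surrenders active access. The step with its rc check:

    # Sketch of the ECM varyoff traced above: an enhanced-concurrent VG is not
    # varied off outright; the node moves to passive mode instead.
    varyonvg -c -n -P datavg 2>/dev/null
    rc_varyonvg=$?
    (( rc_varyonvg != 0 )) && print "passive varyon of datavg failed: $rc_varyonvg"
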
+epprd_rg:cl_deactivate_vgs(0.294):datavg[vgs_varyoff:277] [[ 0 != 0 ]]
+epprd_rg:cl_deactivate_vgs(0.294):datavg[vgs_varyoff:281] amlog_trace '' 'Deactivating Volume Group|datavg'
+epprd_rg:cl_deactivate_vgs(0.294):datavg[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_deactivate_vgs(0.295):datavg[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_deactivate_vgs(0.320):datavg[amlog_trace:319] cltime
+epprd_rg:cl_deactivate_vgs(0.322):datavg[amlog_trace:319] DATE=2023-01-28T18:00:15.802424
+epprd_rg:cl_deactivate_vgs(0.322):datavg[amlog_trace:320] echo '|2023-01-28T18:00:15.802424|INFO: Deactivating Volume Group|datavg'
+epprd_rg:cl_deactivate_vgs(0.322):datavg[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_deactivate_vgs(0.322):datavg[vgs_varyoff:284] RC=0
+epprd_rg:cl_deactivate_vgs(0.322):datavg[vgs_varyoff:287] : Update LVM volume group timestamps in ODM
+epprd_rg:cl_deactivate_vgs(0.322):datavg[vgs_varyoff:289] cl_update_vg_odm_ts -o datavg
+epprd_rg:cl_update_vg_odm_ts(0.001)[77] version=1.13
+epprd_rg:cl_update_vg_odm_ts(0.001)[121] o_flag=''
+epprd_rg:cl_update_vg_odm_ts(0.001)[122] f_flag=''
+epprd_rg:cl_update_vg_odm_ts(0.001)[123] getopts :of option
+epprd_rg:cl_update_vg_odm_ts(0.001)[126] : Local timestamps should be good, since volume group was
+epprd_rg:cl_update_vg_odm_ts(0.001)[127] : just varyied on or off
+epprd_rg:cl_update_vg_odm_ts(0.001)[128] o_flag=TRUE
+epprd_rg:cl_update_vg_odm_ts(0.001)[123] getopts :of option
+epprd_rg:cl_update_vg_odm_ts(0.001)[142] shift 1
+epprd_rg:cl_update_vg_odm_ts(0.001)[144] vg_name=datavg
+epprd_rg:cl_update_vg_odm_ts(0.001)[145] [[ -z datavg ]]
+epprd_rg:cl_update_vg_odm_ts(0.001)[151] shift
+epprd_rg:cl_update_vg_odm_ts(0.001)[152] node_list=''
+epprd_rg:cl_update_vg_odm_ts(0.001)[153] /usr/es/sbin/cluster/utilities/cl_get_path all
+epprd_rg:cl_update_vg_odm_ts(0.004)[153] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin
+epprd_rg:cl_update_vg_odm_ts(0.004)[155] [[ -z '' ]]
+epprd_rg:cl_update_vg_odm_ts(0.004)[158] : Check to see if this update is necessary - some LVM levels automatically
+epprd_rg:cl_update_vg_odm_ts(0.004)[159] : update volume group timestamps clusterwide.
+epprd_rg:cl_update_vg_odm_ts(0.004)[163] instfix -iqk IV74100
+epprd_rg:cl_update_vg_odm_ts(0.005)[163] 1> /dev/null 2>& 1
+epprd_rg:cl_update_vg_odm_ts(0.012)[164] instfix -iqk IV74883
+epprd_rg:cl_update_vg_odm_ts(0.013)[164] 1> /dev/null 2>& 1
+epprd_rg:cl_update_vg_odm_ts(0.020)[165] instfix -iqk IV74698
+epprd_rg:cl_update_vg_odm_ts(0.021)[165] 1> /dev/null 2>& 1
+epprd_rg:cl_update_vg_odm_ts(0.028)[166] instfix -iqk IV74246
+epprd_rg:cl_update_vg_odm_ts(0.028)[166] 1> /dev/null 2>& 1
+epprd_rg:cl_update_vg_odm_ts(0.035)[174] emgr -l -L IV74883
+epprd_rg:cl_update_vg_odm_ts(0.036)[174] 2> /dev/null
+epprd_rg:cl_update_vg_odm_ts(0.291)[174] emgr -l -L IV74698
+epprd_rg:cl_update_vg_odm_ts(0.292)[174] 2> /dev/null
+epprd_rg:cl_update_vg_odm_ts(0.543)[174] emgr -l -L IV74246
+epprd_rg:cl_update_vg_odm_ts(0.544)[174] 2> /dev/null
+epprd_rg:cl_update_vg_odm_ts(0.795)[183] : Each of the V, R, M and F fields are padded to fixed length,
+epprd_rg:cl_update_vg_odm_ts(0.795)[184] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_update_vg_odm_ts(0.795)[185] : 99.99.999.999
+epprd_rg:cl_update_vg_odm_ts(0.795)[187] typeset -li V R M F
+epprd_rg:cl_update_vg_odm_ts(0.795)[188] typeset -Z2 V
+epprd_rg:cl_update_vg_odm_ts(0.795)[189] typeset -Z2 R
+epprd_rg:cl_update_vg_odm_ts(0.795)[190] typeset -Z3 M
+epprd_rg:cl_update_vg_odm_ts(0.795)[191] typeset -Z3 F
+epprd_rg:cl_update_vg_odm_ts(0.795)[192] lvm_lvl6=601008015
+epprd_rg:cl_update_vg_odm_ts(0.795)[192] typeset -li lvm_lvl6
+epprd_rg:cl_update_vg_odm_ts(0.795)[194] lvm_lvl7=701003046
+epprd_rg:cl_update_vg_odm_ts(0.795)[194] typeset -li lvm_lvl7
+epprd_rg:cl_update_vg_odm_ts(0.795)[195] VRMF=0
+epprd_rg:cl_update_vg_odm_ts(0.795)[195] typeset -li VRMF
+epprd_rg:cl_update_vg_odm_ts(0.795)[198] : Here try and figure out what level of LVM is installed
+epprd_rg:cl_update_vg_odm_ts(0.796)[200] lslpp -lcqOr bos.rte.lvm
+epprd_rg:cl_update_vg_odm_ts(0.797)[200] cut -f3 -d:
+epprd_rg:cl_update_vg_odm_ts(0.799)[200] read V R M F
+epprd_rg:cl_update_vg_odm_ts(0.799)[200] IFS=.
+epprd_rg:cl_update_vg_odm_ts(0.799)[201] VRMF=0702005101
+epprd_rg:cl_update_vg_odm_ts(0.799)[203] (( 7 == 6 && 702005101 >= 601008015 ))
+epprd_rg:cl_update_vg_odm_ts(0.799)[204] (( 702005101 >= 701003046 ))
+epprd_rg:cl_update_vg_odm_ts(0.799)[207] : LVM at a level in which timestamp update is unnecessary
+epprd_rg:cl_update_vg_odm_ts(0.799)[209] return 0
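The comments traced at cl_update_vg_odm_ts[183]-[195] explain how the script decides the update is unnecessary: V, R, M and F are zero-padded to 2, 2, 3 and 3 digits respectively, so the concatenated VRMF can be compared as a single integer (maximum 99.99.999.999; 7.2.5.101 becomes 0702005101). A standalone sketch of that comparison, assuming bos.rte.lvm is installed:

    #!/bin/ksh
    # Sketch: compare the installed bos.rte.lvm level to a threshold by
    # zero-padding V.R.M.F into a single integer (max 99.99.999.999).
    typeset -li V R M F VRMF=0
    typeset -Z2 V R
    typeset -Z3 M F
    typeset -li lvm_lvl7=701003046   # 7.1.3.46, threshold from this trace

    lslpp -lcqOr bos.rte.lvm | cut -f3 -d: | IFS=. read V R M F
    VRMF=$V$R$M$F                    # e.g. 07 02 005 101 -> 0702005101

    if (( V == 7 && VRMF >= lvm_lvl7 ))
    then
        : LVM already updates VG timestamps clusterwide, nothing to do
    fi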
+epprd_rg:cl_deactivate_vgs(1.125):datavg[vgs_varyoff:291] (( 0 == 0 ))
+epprd_rg:cl_deactivate_vgs(1.126):datavg[vgs_varyoff:294] : successful varyoff, set the fence height to read-only
+epprd_rg:cl_deactivate_vgs(1.126):datavg[vgs_varyoff:297] cl_set_vg_fence_height -c datavg ro
cl_set_vg_fence_height[126]: version @(#)10 1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37
cl_set_vg_fence_height[180]: open(/usr/es/sbin/cluster/etc/vg/datavg.uuid, O_RDONLY)
cl_set_vg_fence_height[214]: read(datavg, 16)
cl_set_vg_fence_height[237]: close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)
cl_set_vg_fence_height[265]: sfwSetFenceGroup(vg=datavg uuid=ec2db4422261eae02091227fb9e53c88 height=ro(2))
+epprd_rg:cl_deactivate_vgs(1.129):datavg[vgs_varyoff:298] RC=0
+epprd_rg:cl_deactivate_vgs(1.129):datavg[vgs_varyoff:299] (( 0 != 0 ))
+epprd_rg:cl_deactivate_vgs(1.129):datavg[vgs_varyoff:403] : Append status to the status file.
+epprd_rg:cl_deactivate_vgs(1.129):datavg[vgs_varyoff:407] echo datavg 0
+epprd_rg:cl_deactivate_vgs(1.129):datavg[vgs_varyoff:407] 1>> /tmp/_deactivate_vgs.tmp
+epprd_rg:cl_deactivate_vgs(1.130):datavg[vgs_varyoff:408] return 0
+epprd_rg:cl_deactivate_vgs(1.130):datavg[731] unset PS4_LOOP
+epprd_rg:cl_deactivate_vgs(1.130)[736] : Wait for the background instances of vgs_varyoff
+epprd_rg:cl_deactivate_vgs(1.130)[738] wait
+epprd_rg:cl_deactivate_vgs(1.130)[741] : Collect any failure indications from backgrounded varyoff processing
+epprd_rg:cl_deactivate_vgs(1.130)[743] [[ -f /tmp/_deactivate_vgs.tmp ]]
+epprd_rg:cl_deactivate_vgs(1.131)[748] cat /tmp/_deactivate_vgs.tmp
+epprd_rg:cl_deactivate_vgs(1.131)[748] read VGNAME VARYOFF_STATUS
+epprd_rg:cl_deactivate_vgs(1.132)[750] [[ 0 == 1 ]]
+epprd_rg:cl_deactivate_vgs(1.132)[748] read VGNAME VARYOFF_STATUS
+epprd_rg:cl_deactivate_vgs(1.133)[765] rm -f /tmp/_deactivate_vgs.tmp
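cl_deactivate_vgs runs one vgs_varyoff instance per volume group in the background; each instance appends "<vg> <status>" to /tmp/_deactivate_vgs.tmp (vgs_varyoff:407 above), and the parent waits for them all before reading the file back to spot failures (lines 736-765 above). A reduced sketch of that fan-out/collect pattern, with plain varyoffvg standing in for the real vgs_varyoff worker:

    #!/bin/ksh
    # Sketch: vary off volume groups in parallel, collect per-VG status.
    STATUSFILE=/tmp/_deactivate_vgs.tmp
    rm -f $STATUSFILE

    for vg in datavg                # placeholder list of VGs to release
    do
        (
            varyoffvg $vg
            print $vg $? >> $STATUSFILE
        ) &
    done
    wait                            # wait for the background instances

    [[ -f $STATUSFILE ]] || exit 0
    cat $STATUSFILE | while read VGNAME VARYOFF_STATUS
    do
        [[ $VARYOFF_STATUS == 1 ]] && print "varyoff failed: $VGNAME"
    done
    rm -f $STATUSFILE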
+epprd_rg:cl_deactivate_vgs(1.135)[769] : Update Resource Manager - release success for the non-error VGs
+epprd_rg:cl_deactivate_vgs(1.135)[771] ALLNOERRVGS=All_nonerror_volume_groups
+epprd_rg:cl_deactivate_vgs(1.135)[772] [[ true == false ]]
+epprd_rg:cl_deactivate_vgs(1.135)[778] cl_RMupdate resource_down All_nonerror_volume_groups cl_deactivate_vgs
2023-01-28T18:00:16.637900
2023-01-28T18:00:16.642379
+epprd_rg:cl_deactivate_vgs(1.163)[782] [[ FALSE == TRUE ]]
+epprd_rg:cl_deactivate_vgs(1.163)[791] exit 0
+epprd_rg:process_resources[process_volume_groups:2606] RC=0
+epprd_rg:process_resources[process_volume_groups:2607] (( 0 != 0 && 0 != 11 ))
+epprd_rg:process_resources[process_volume_groups:2620] (( 0 != 0 ))
+epprd_rg:process_resources[process_volume_groups:2627] return 0
+epprd_rg:process_resources[process_volume_groups_main:2556] STAT=0
+epprd_rg:process_resources[process_volume_groups_main:2559] return 0
+epprd_rg:process_resources[3572] RC=0
+epprd_rg:process_resources[3573] [[ RELEASE == RELEASE ]]
+epprd_rg:process_resources[3575] [[ 0 != 0 ]]
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T18:00:16.655828 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=SERVICE_LABELS ACTION=RELEASE IP_LABELS='"epprd"' RESOURCE_GROUPS='"epprd_rg' '"' COMMUNICATION_LINKS='""'
+epprd_rg:process_resources[1] JOB_TYPE=SERVICE_LABELS
+epprd_rg:process_resources[1] ACTION=RELEASE
+epprd_rg:process_resources[1] IP_LABELS=epprd
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[1] COMMUNICATION_LINKS=''
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
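Each pass of the process_resources main loop asks the resource group policy agent what to do next: clRGPA prints shell assignments (JOB_TYPE, ACTION, IP_LABELS, ...) that are eval'd under set -a so they land in the environment, and JOB_TYPE then drives the dispatch below. A toy sketch of that driver, with canned answers standing in for clRGPA:

    #!/bin/ksh
    # Sketch: eval clRGPA-style key=value answers and dispatch on them.
    # The answers array stands in for successive clRGPA responses.
    set -A answers 'JOB_TYPE=SERVICE_LABELS ACTION=RELEASE IP_LABELS="epprd"' \
                   'JOB_TYPE=NONE'

    for answer in "${answers[@]}"
    do
        set -a                  # auto-export everything the eval defines
        eval $answer
        set +a

        case $JOB_TYPE in
        NONE)           print done; break ;;
        SERVICE_LABELS) print "would $ACTION $IP_LABELS" ;;
        *)              print "unhandled job type $JOB_TYPE" ;;
        esac
    done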
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ SERVICE_LABELS == RELEASE ]]
+epprd_rg:process_resources[3360] [[ SERVICE_LABELS == ONLINE ]]
+epprd_rg:process_resources[3407] [[ RELEASE == ACQUIRE ]]
+epprd_rg:process_resources[3411] release_service_labels
+epprd_rg:process_resources[release_service_labels:3125] PS4_FUNC=release_service_labels
+epprd_rg:process_resources[release_service_labels:3125] typeset PS4_FUNC
+epprd_rg:process_resources[release_service_labels:3126] [[ high == high ]]
+epprd_rg:process_resources[release_service_labels:3126] set -x
+epprd_rg:process_resources[release_service_labels:3127] STAT=0
+epprd_rg:process_resources[release_service_labels:3128] clcallev release_service_addr
Jan 28 2023 18:00:16 EVENT START: release_service_addr
|2023-01-28T18:00:16|22169|EVENT START: release_service_addr |
+epprd_rg:release_service_addr[87] version=1.44
+epprd_rg:release_service_addr[90] STATUS=0
+epprd_rg:release_service_addr[91] PROC_RES=false
+epprd_rg:release_service_addr[95] [[ SERVICE_LABELS != 0 ]]
+epprd_rg:release_service_addr[95] [[ SERVICE_LABELS != GROUP ]]
+epprd_rg:release_service_addr[96] PROC_RES=true
+epprd_rg:release_service_addr[97] _IP_LABELS=epprd
+epprd_rg:release_service_addr[109] saveNSORDER=UNDEFINED
+epprd_rg:release_service_addr[110] NSORDER=local
+epprd_rg:release_service_addr[110] export NSORDER
+epprd_rg:release_service_addr[117] export GROUPNAME
+epprd_rg:release_service_addr[119] [[ true == true ]]
+epprd_rg:release_service_addr[120] get_list_head epprd
+epprd_rg:release_service_addr[120] read SERVICELABELS
+epprd_rg:release_service_addr[121] get_list_tail epprd
+epprd_rg:release_service_addr[121] read IP_LABELS
+epprd_rg:release_service_addr[127] cl_RMupdate resource_releasing All_service_addrs release_service_addr
2023-01-28T18:00:16.741228
2023-01-28T18:00:16.745688
+epprd_rg:release_service_addr[136] clgetif -a epprd
+epprd_rg:release_service_addr[136] LC_ALL=C
en0
+epprd_rg:release_service_addr[137] return_code=0
+epprd_rg:release_service_addr[137] typeset -li return_code
+epprd_rg:release_service_addr[138] (( 0 ))
+epprd_rg:release_service_addr[159] cllsif -J '~' -Sn epprd
+epprd_rg:release_service_addr[159] cut -d~ -f7
+epprd_rg:release_service_addr[159] uniq
+epprd_rg:release_service_addr[159] textual_addr=61.81.244.156
+epprd_rg:release_service_addr[160] clgetif -a 61.81.244.156
+epprd_rg:release_service_addr[160] LC_ALL=C
+epprd_rg:release_service_addr[160] INTERFACE='en0 '
+epprd_rg:release_service_addr[161] [[ -z 'en0 ' ]]
+epprd_rg:release_service_addr[182] clgetif -n 61.81.244.156
+epprd_rg:release_service_addr[182] LC_ALL=C
+epprd_rg:release_service_addr[182] NETMASK='255.255.255.0 '
+epprd_rg:release_service_addr[183] cllsif -J '~'
+epprd_rg:release_service_addr[183] grep -wF 61.81.244.156
+epprd_rg:release_service_addr[184] cut -d~ -f3
+epprd_rg:release_service_addr[184] sort -u
+epprd_rg:release_service_addr[183] NETWORK=net_ether_01
+epprd_rg:release_service_addr[189] cllsif -J '~' -Si epprda
+epprd_rg:release_service_addr[189] grep '~boot~'
+epprd_rg:release_service_addr[190] cut -d~ -f3,7
+epprd_rg:release_service_addr[190] grep ^net_ether_01~
+epprd_rg:release_service_addr[191] cut -d~ -f2
+epprd_rg:release_service_addr[191] tail -1
+epprd_rg:release_service_addr[189] BOOT=61.81.244.134
+epprd_rg:release_service_addr[193] [[ -z 61.81.244.134 ]]
+epprd_rg:release_service_addr[214] [[ -n 'en0 ' ]]
+epprd_rg:release_service_addr[216] cut -f15 -d~
+epprd_rg:release_service_addr[216] cllsif -J '~' -Sn 61.81.244.156
+epprd_rg:release_service_addr[216] [[ AF_INET == AF_INET6 ]]
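Everything cl_swap_IP_address needs has been recovered above from the cluster topology: cllsif -J '~' prints one '~'-delimited record per adapter, so fixed cut fields select the address (field 7), the network (field 3) and, later in this trace, the interface (field 9). A sketch of the same lookups, assuming that field layout:

    #!/bin/ksh
    # Sketch: map a service label to address, netmask, network and boot
    # address via cllsif's '~'-delimited output (field 3 = network,
    # field 7 = IP address, as seen in this trace).
    SERVICELABEL=epprd
    NODENAME=epprda

    textual_addr=$(cllsif -J '~' -Sn $SERVICELABEL | cut -d~ -f7 | uniq)
    INTERFACE=$(LC_ALL=C clgetif -a $textual_addr)
    NETMASK=$(LC_ALL=C clgetif -n $textual_addr)
    NETWORK=$(cllsif -J '~' | grep -wF $textual_addr | cut -d~ -f3 | sort -u)

    # Last boot address on the same network for this node.
    BOOT=$(cllsif -J '~' -Si $NODENAME | grep '~boot~' | cut -d~ -f3,7 |
           grep "^${NETWORK}~" | cut -d~ -f2 | tail -1)

    print "release $textual_addr from ${INTERFACE:-?}, revert to $BOOT"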
+epprd_rg:release_service_addr[221] cl_swap_IP_address rotating release en0 61.81.244.134 61.81.244.156 255.255.255.0
+epprd_rg:cl_swap_IP_address[462] version=1.9.14.8
+epprd_rg:cl_swap_IP_address[464] cl_get_path -S
+epprd_rg:cl_swap_IP_address[464] OP_SEP='~'
+epprd_rg:cl_swap_IP_address[465] LC_ALL=C
+epprd_rg:cl_swap_IP_address[465] export LC_ALL
+epprd_rg:cl_swap_IP_address[466] RESTORE_ROUTES=/usr/es/sbin/cluster/.restore_routes
+epprd_rg:cl_swap_IP_address[468] cl_echo 33 'Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating release en0 61.81.244.134 61.81.244.156 255.255.255.0' /usr/es/sbin/cluster/events/utils/cl_swap_IP_address 'rotating release en0 61.81.244.134 61.81.244.156 255.255.255.0'
Jan 28 2023 18:00:16 Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating release en0 61.81.244.134 61.81.244.156 255.255.255.0
+epprd_rg:cl_swap_IP_address[470] typeset -i oslevel
+epprd_rg:cl_swap_IP_address[471] /usr/bin/sed s/-//g
+epprd_rg:cl_swap_IP_address[471] /usr/bin/oslevel -r
+epprd_rg:cl_swap_IP_address[471] oslevel=720005
+epprd_rg:cl_swap_IP_address[476] [[ 6 == 6 ]]
+epprd_rg:cl_swap_IP_address[477] [[ 6 == 7 ]]
+epprd_rg:cl_swap_IP_address[484] no -a
+epprd_rg:cl_swap_IP_address[484] grep ipignoreredirects
+epprd_rg:cl_swap_IP_address[484] awk '{ print $3 }'
+epprd_rg:cl_swap_IP_address[484] PRIOR_IPIGNORE_REDIRECTS_VALUE=0
+epprd_rg:cl_swap_IP_address[485] /usr/sbin/no -o ipignoreredirects=1
Setting ipignoreredirects to 1
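Before touching any routes, cl_swap_IP_address saves the current ipignoreredirects value (line 484 above) and forces it to 1 so that ICMP redirects cannot repopulate routes mid-swap; the saved value is put back near the end of the script. The same save/set/restore idiom, sketched:

    #!/bin/ksh
    # Sketch: temporarily ignore ICMP redirects around a route swap,
    # then restore whatever value was set before.
    PRIOR=$(no -a | grep ipignoreredirects | awk '{ print $3 }')

    /usr/sbin/no -o ipignoreredirects=1

    # ... delete and re-add routes here ...

    /usr/sbin/no -o ipignoreredirects=$PRIOR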
+epprd_rg:cl_swap_IP_address[490] PROC_RES=false
+epprd_rg:cl_swap_IP_address[491] [[ SERVICE_LABELS != 0 ]]
+epprd_rg:cl_swap_IP_address[491] [[ SERVICE_LABELS != GROUP ]]
+epprd_rg:cl_swap_IP_address[492] PROC_RES=true
+epprd_rg:cl_swap_IP_address[495] set -u
+epprd_rg:cl_swap_IP_address[497] RC=0
+epprd_rg:cl_swap_IP_address[504] netstat -in
Name Mtu   Network    Address            Ipkts     Ierrs Opkts    Oerrs Coll
en0  1500  link#2     fa.e6.13.4e.a9.20  183735410 0     60752085 0     0
en0  1500  61.81.244  61.81.244.156      183735410 0     60752085 0     0
en0  1500  61.81.244  61.81.244.134      183735410 0     60752085 0     0
lo0  16896 link#1                        34267429  0     34267429 0     0
lo0  16896 127        127.0.0.1          34267429  0     34267429 0     0
lo0  16896 ::1%1                         34267429  0     34267429 0     0
+epprd_rg:cl_swap_IP_address[505] netstat -rnC
Routing tables
Destination    Gateway        Flags Wt Policy If  Cost Config_Cost
Route tree for Protocol Family 2 (Internet):
default        61.81.244.1    UG    1  -      en0 0    0
61.81.244.0    61.81.244.156  UHSb  1  -      en0 0    0  =>
61.81.244/24   61.81.244.156  U     1  -      en0 0    0
61.81.244.134  127.0.0.1      UGHS  1  -      lo0 0    0
61.81.244.156  127.0.0.1      UGHS  1  -      lo0 0    0
61.81.244.255  61.81.244.156  UHSb  1  -      en0 0    0
127/8          127.0.0.1      U     1  -      lo0 0    0
Route tree for Protocol Family 24 (Internet v6):
::1%1          ::1%1          UH    1  -      lo0 0    0
+epprd_rg:cl_swap_IP_address[506] CASC_OR_ROT=rotating
+epprd_rg:cl_swap_IP_address[507] ACQ_OR_RLSE=release
+epprd_rg:cl_swap_IP_address[508] IF=en0
+epprd_rg:cl_swap_IP_address[509] ADDR=61.81.244.134
+epprd_rg:cl_swap_IP_address[510] OLD_ADDR=61.81.244.156
+epprd_rg:cl_swap_IP_address[511] NETMASK=255.255.255.0
+epprd_rg:cl_swap_IP_address[514] [[ rotating == cascading ]]
+epprd_rg:cl_swap_IP_address[525] cut -f3 -d~
+epprd_rg:cl_swap_IP_address[525] cllsif -J '~' -Sw -n 61.81.244.134
+epprd_rg:cl_swap_IP_address[525] NET=net_ether_01
+epprd_rg:cl_swap_IP_address[528] clodmget -qidentifier=61.81.244.134 -f max_aliases -n HACMPadapter
+epprd_rg:cl_swap_IP_address[528] ALIAS_FIRST=0
+epprd_rg:cl_swap_IP_address[529] grep -c -w inet
+epprd_rg:cl_swap_IP_address[529] ifconfig en0
+epprd_rg:cl_swap_IP_address[529] LC_ALL=C
+epprd_rg:cl_swap_IP_address[529] NUM_ADDRS=2
+epprd_rg:cl_swap_IP_address[530] [[ release == acquire ]]
+epprd_rg:cl_swap_IP_address[598] cl_echo 7320 'cl_swap_IP_address: Removing aliased IP address 61.81.244.156 from adapter en0' cl_swap_IP_address 61.81.244.156 en0
Jan 28 2023 18:00:16 cl_swap_IP_address: Removing aliased IP address 61.81.244.156 from adapter en0
+epprd_rg:cl_swap_IP_address[600] amlog_trace '' 'Deliasing Service IP|61.81.244.156'
+epprd_rg:cl_swap_IP_address[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_swap_IP_address[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_swap_IP_address[amlog_trace:319] cltime
+epprd_rg:cl_swap_IP_address[amlog_trace:319] DATE=2023-01-28T18:00:16.987832
+epprd_rg:cl_swap_IP_address[amlog_trace:320] echo '|2023-01-28T18:00:16.987832|INFO: Deliasing Service IP|61.81.244.156'
+epprd_rg:cl_swap_IP_address[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_swap_IP_address[601] PERSISTENT=''
+epprd_rg:cl_swap_IP_address[602] ADDR1=61.81.244.156
+epprd_rg:cl_swap_IP_address[603] disable_pmtu_gated
Setting tcp_pmtu_discover to 0
Setting udp_pmtu_discover to 0
+epprd_rg:cl_swap_IP_address[604] alias_replace_routes /usr/es/sbin/cluster/.restore_routes en0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:168] RR=/usr/es/sbin/cluster/.restore_routes
+epprd_rg:cl_swap_IP_address[alias_replace_routes:169] shift
+epprd_rg:cl_swap_IP_address[alias_replace_routes:170] interfaces=en0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:171] RC=0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:173] cp /dev/null /usr/es/sbin/cluster/.restore_routes
+epprd_rg:cl_swap_IP_address[alias_replace_routes:175] cat
+epprd_rg:cl_swap_IP_address[alias_replace_routes:175] 1> /usr/es/sbin/cluster/.restore_routes 0<< \EOF
+epprd_rg:cl_swap_IP_address[alias_replace_routes:175] date
#!/bin/ksh
#
# Script created by cl_swap_IP_address on Sat Jan 28 18:00:17 KORST 2023
#
PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin
PS4='${GROUPNAME:++$GROUPNAME}:${PROGNAME:-${0##*/}}${PS4_TIMER:+($SECONDS)}${PS4_LOOP:+:$PS4_LOOP}[${ERRNO:+${PS4_FUNC:-}+}${KSH_VERSION:+${.sh.fun:+${.sh.fun}:}}$LINENO] '
export VERBOSE_LOGGING=${VERBOSE_LOGGING:-"high"}
[[ "$VERBOSE_LOGGING" = "high" ]] && set -x
: Starting $0 at $(date)
#
EOF
+epprd_rg:cl_swap_IP_address[alias_replace_routes:189] awk '$3 !~ "[Ll]ink" && $3 !~ ":" && $3 !~ "Network" {print $4}'
+epprd_rg:cl_swap_IP_address[alias_replace_routes:189] netstat -in
+epprd_rg:cl_swap_IP_address[alias_replace_routes:189] LOCADDRS=$'61.81.244.156\n61.81.244.134\n127.0.0.1'
+epprd_rg:cl_swap_IP_address[alias_replace_routes:191] netstat -rnC
Routing tables
Destination    Gateway        Flags Wt Policy If  Cost Config_Cost
Route tree for Protocol Family 2 (Internet):
default        61.81.244.1    UG    1  -      en0 0    0
61.81.244.0    61.81.244.156  UHSb  1  -      en0 0    0  =>
61.81.244/24   61.81.244.156  U     1  -      en0 0    0
61.81.244.134  127.0.0.1      UGHS  1  -      lo0 0    0
61.81.244.156  127.0.0.1      UGHS  1  -      lo0 0    0
61.81.244.255  61.81.244.156  UHSb  1  -      en0 0    0
127/8          127.0.0.1      U     1  -      lo0 0    0
Route tree for Protocol Family 24 (Internet v6):
::1%1          ::1%1          UH    1  -      lo0 0    0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:200] I=1
+epprd_rg:cl_swap_IP_address[alias_replace_routes:200] typeset -li I
+epprd_rg:cl_swap_IP_address[alias_replace_routes:201] NXTSVC=''
+epprd_rg:cl_swap_IP_address[alias_replace_routes:203] awk '$3 !~ "[Ll]ink" && $3 !~ ":" && ($1 == "en0" || $1 == "en0*") {print $4}'
+epprd_rg:cl_swap_IP_address[alias_replace_routes:203] netstat -in
+epprd_rg:cl_swap_IP_address[alias_replace_routes:203] IFADDRS=$'61.81.244.156\n61.81.244.134'
+epprd_rg:cl_swap_IP_address[alias_replace_routes:204] cllsif -J '~' -Spi epprda
+epprd_rg:cl_swap_IP_address[alias_replace_routes:204] grep '~net_ether_01~'
+epprd_rg:cl_swap_IP_address[alias_replace_routes:205] grep -E '~service~|~persistent~'
+epprd_rg:cl_swap_IP_address[alias_replace_routes:205] cut -d~ -f7
+epprd_rg:cl_swap_IP_address[alias_replace_routes:205] sort -u
+epprd_rg:cl_swap_IP_address[alias_replace_routes:204] SVCADDRS=61.81.244.156
+epprd_rg:cl_swap_IP_address[alias_replace_routes:210] awk '$1 !~ ":" {print $1}'
+epprd_rg:cl_swap_IP_address[alias_replace_routes:210] echo 61.81.244.156
+epprd_rg:cl_swap_IP_address[alias_replace_routes:210] SVCADDRS=61.81.244.156
+epprd_rg:cl_swap_IP_address[alias_replace_routes:212] cllsif -J '~' -Spi epprda
+epprd_rg:cl_swap_IP_address[alias_replace_routes:212] grep '~net_ether_01~'
+epprd_rg:cl_swap_IP_address[alias_replace_routes:213] grep -E '~persistent~'
+epprd_rg:cl_swap_IP_address[alias_replace_routes:214] cut -d~ -f7
+epprd_rg:cl_swap_IP_address[alias_replace_routes:212] PERSISTENT_IP=''
+epprd_rg:cl_swap_IP_address[alias_replace_routes:215] routeaddr=''
+epprd_rg:cl_swap_IP_address[alias_replace_routes:221] clgetnet 61.81.244.156 255.255.255.0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:221] clgetnet 61.81.244.156 255.255.255.0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:221] [[ 61.81.244.0 == 61.81.244.0 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:221] clgetnet 61.81.244.156 255.255.255.0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:221] clgetnet 61.81.244.156 255.255.255.0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:221] [[ 61.81.244.0 == 61.81.244.0 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:221] [[ 61.81.244.156 == 61.81.244.156 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:223] [[ -z '' ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:225] routeaddr=61.81.244.156
+epprd_rg:cl_swap_IP_address[alias_replace_routes:227] [[ 61.81.244.156 != 61.81.244.156 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:234] [[ -n '' ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:221] clgetnet 61.81.244.134 255.255.255.0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:221] clgetnet 61.81.244.156 255.255.255.0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:221] [[ 61.81.244.0 == 61.81.244.0 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:221] clgetnet 61.81.244.134 255.255.255.0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:221] clgetnet 61.81.244.156 255.255.255.0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:221] [[ 61.81.244.0 == 61.81.244.0 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:221] [[ 61.81.244.134 == 61.81.244.156 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:234] [[ -n '' ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:243] NXTADDR=''
+epprd_rg:cl_swap_IP_address[alias_replace_routes:244] bootaddr=''
+epprd_rg:cl_swap_IP_address[alias_replace_routes:245] [[ -z '' ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:247] cllsif -J '~' -Spi epprda
+epprd_rg:cl_swap_IP_address[alias_replace_routes:247] grep '~net_ether_01~'
+epprd_rg:cl_swap_IP_address[alias_replace_routes:247] grep '~boot~'
+epprd_rg:cl_swap_IP_address[alias_replace_routes:247] awk -F~ '$9 == "en0" { print $7; }'
+epprd_rg:cl_swap_IP_address[alias_replace_routes:247] bootaddr=61.81.244.134
+epprd_rg:cl_swap_IP_address[alias_replace_routes:250] [[ 61.81.244.156 == 61.81.244.134 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:250] [[ 61.81.244.134 == 61.81.244.134 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:250] clgetnet 61.81.244.134 255.255.255.0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:250] clgetnet 61.81.244.134 255.255.255.0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:250] [[ 61.81.244.0 == 61.81.244.0 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:250] clgetnet 61.81.244.134 255.255.255.0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:250] clgetnet 61.81.244.156 255.255.255.0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:250] [[ 61.81.244.0 == 61.81.244.0 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:250] [[ 61.81.244.134 != 61.81.244.156 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:252] NXTADDR=61.81.244.134
+epprd_rg:cl_swap_IP_address[alias_replace_routes:253] break
+epprd_rg:cl_swap_IP_address[alias_replace_routes:258] swaproute=0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:259] NETSTAT_FLAGS='-nrf inet'
+epprd_rg:cl_swap_IP_address[alias_replace_routes:261] [[ 61.81.244.156 == 61.81.244.156 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:264] swaproute=1
+epprd_rg:cl_swap_IP_address[alias_replace_routes:267] netstat -nrf inet
+epprd_rg:cl_swap_IP_address[alias_replace_routes:267] fgrep -w en0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:267] read DEST GW FLAGS OTHER
+epprd_rg:cl_swap_IP_address[alias_replace_routes:268] LOOPBACK=127.0.0.1
+epprd_rg:cl_swap_IP_address[alias_replace_routes:336] clgetnet 61.81.244.156 255.255.255.0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:336] clgetnet 61.81.244.1 255.255.255.0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:336] [[ 61.81.244.0 == 61.81.244.0 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:338] [[ 0 == 0 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:341] [[ -z release ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:341] [[ 61.81.244.156 == ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:346] print 'cl_route_change default 127.0.0.1 61.81.244.1 inet'
+epprd_rg:cl_swap_IP_address[alias_replace_routes:346] 1>> /usr/es/sbin/cluster/.restore_routes
+epprd_rg:cl_swap_IP_address[alias_replace_routes:347] add_rc_check /usr/es/sbin/cluster/.restore_routes cl_route_change
+epprd_rg:cl_swap_IP_address[add_rc_check:70] RR=/usr/es/sbin/cluster/.restore_routes
+epprd_rg:cl_swap_IP_address[add_rc_check:71] FUNC=cl_route_change
+epprd_rg:cl_swap_IP_address[add_rc_check:73] cat
+epprd_rg:cl_swap_IP_address[add_rc_check:73] 1>> /usr/es/sbin/cluster/.restore_routes 0<< \EOF
rc=$?
if [[ $rc != 0 ]]
then
echo "ERROR: cl_route_change failed with code $rc"
cl_route_change_RC=$rc
fi
EOF
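alias_replace_routes builds /usr/es/sbin/cluster/.restore_routes as it works: the generated file begins with the ksh header shown earlier in this trace, and for every route it changes it appends the inverse cl_route_change command (line 346) followed by the rc-capture block that add_rc_check just emitted. A stripped-down sketch of this generate-an-undo-script pattern (the recorded command is the one from this trace):

    #!/bin/ksh
    # Sketch: record inverse operations in an executable undo script.
    RR=/usr/es/sbin/cluster/.restore_routes
    cp /dev/null $RR
    print '#!/bin/ksh' >> $RR

    # For each change made, append the command that reverses it,
    # plus a check that remembers any non-zero return code.
    print 'cl_route_change default 127.0.0.1 61.81.244.1 inet' >> $RR
    print 'rc=$?; [[ $rc != 0 ]] && cl_route_change_RC=$rc' >> $RR

    print 'exit $cl_route_change_RC' >> $RR
    chmod +x $RR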
+epprd_rg:cl_swap_IP_address[alias_replace_routes:350] cl_route_change default 61.81.244.1 127.0.0.1 inet
+epprd_rg:cl_swap_IP_address[alias_replace_routes:351] RC=0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:352] : cl_route_change completed with 0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:353] I=I+1
+epprd_rg:cl_swap_IP_address[alias_replace_routes:267] read DEST GW FLAGS OTHER
+epprd_rg:cl_swap_IP_address[alias_replace_routes:268] LOOPBACK=127.0.0.2
+epprd_rg:cl_swap_IP_address[alias_replace_routes:290] [[ 61.81.244.156 == 61.81.244.156 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:292] [[ '' != '' ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:292] [[ 61.81.244.156 == 61.81.244.156 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:294] route delete -host 61.81.244.0 61.81.244.156
61.81.244.156 host 61.81.244.0: gateway 61.81.244.156
+epprd_rg:cl_swap_IP_address[alias_replace_routes:267] read DEST GW FLAGS OTHER
+epprd_rg:cl_swap_IP_address[alias_replace_routes:268] LOOPBACK=127.0.0.2
+epprd_rg:cl_swap_IP_address[alias_replace_routes:272] clgetnet 61.81.244.156 255.255.255.0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:272] clgetnet 61.81.244.156 255.255.255.0
+epprd_rg:cl_swap_IP_address[alias_replace_routes:272] [[ 61.81.244.0 == 61.81.244.0 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:274] [[ 61.81.244.156 == 61.81.244.156 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:276] [[ '' != '' ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:276] [[ 61.81.244.156 == 61.81.244.156 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:279] route delete -net 61.81.244/24 61.81.244.156
61.81.244.156 net 61.81.244: gateway 61.81.244.156
+epprd_rg:cl_swap_IP_address[alias_replace_routes:267] read DEST GW FLAGS OTHER
+epprd_rg:cl_swap_IP_address[alias_replace_routes:268] LOOPBACK=127.0.0.2
+epprd_rg:cl_swap_IP_address[alias_replace_routes:290] [[ 61.81.244.156 == 61.81.244.156 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:292] [[ '' != '' ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:292] [[ 61.81.244.156 == 61.81.244.156 ]]
+epprd_rg:cl_swap_IP_address[alias_replace_routes:294] route delete -host 61.81.244.255 61.81.244.156
61.81.244.156 host 61.81.244.255: gateway 61.81.244.156
+epprd_rg:cl_swap_IP_address[alias_replace_routes:267] read DEST GW FLAGS OTHER
+epprd_rg:cl_swap_IP_address[alias_replace_routes:360] echo 'exit $cl_route_change_RC'
+epprd_rg:cl_swap_IP_address[alias_replace_routes:360] 1>> /usr/es/sbin/cluster/.restore_routes
+epprd_rg:cl_swap_IP_address[alias_replace_routes:361] chmod +x /usr/es/sbin/cluster/.restore_routes
+epprd_rg:cl_swap_IP_address[alias_replace_routes:362] return 0
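The loop at alias_replace_routes:267-294 then removes every remaining route that points through the address being released: netstat -nrf inet is read line by line, and routes whose gateway is the old service address are deleted with -net or -host depending on the destination form (the default route was already re-pointed via cl_route_change above). A simplified sketch of that deletion pass, keying net-versus-host off the CIDR slash as this trace's deletions suggest:

    #!/bin/ksh
    # Sketch: delete routes through the address being released.
    IF=en0
    OLD=61.81.244.156      # old service address, from this trace

    netstat -nrf inet | fgrep -w $IF | while read DEST GW FLAGS OTHER
    do
        [[ $GW == $OLD ]] || continue
        case $DEST in
        */*) route delete -net  $DEST $GW ;;   # e.g. 61.81.244/24
        *)   route delete -host $DEST $GW ;;   # e.g. 61.81.244.255
        esac
    done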
+epprd_rg:cl_swap_IP_address[605] RC=0
+epprd_rg:cl_swap_IP_address[606] : alias_replace_routes completed with 0
+epprd_rg:cl_swap_IP_address[609] clifconfig en0 delete 61.81.244.156
+epprd_rg:clifconfig[117] version=1.9
+epprd_rg:clifconfig[121] set -A args en0 delete 61.81.244.156
+epprd_rg:clifconfig[124] interface=en0
+epprd_rg:clifconfig[125] shift
+epprd_rg:clifconfig[127] [[ -n delete ]]
+epprd_rg:clifconfig[130] delete_val=1
+epprd_rg:clifconfig[167] shift
+epprd_rg:clifconfig[127] [[ -n 61.81.244.156 ]]
+epprd_rg:clifconfig[147] params=' address=61.81.244.156'
+epprd_rg:clifconfig[147] addr=61.81.244.156
+epprd_rg:clifconfig[167] shift
+epprd_rg:clifconfig[127] [[ -n '' ]]
+epprd_rg:clifconfig[174] [[ -n 1 ]]
+epprd_rg:clifconfig[174] [[ -n epprd_rg ]]
+epprd_rg:clifconfig[175] clwparname epprd_rg
+epprd_rg:clwparname[38] version=1.3.1.1
+epprd_rg:clwparname[44] clodmget '-qname = WPAR_NAME' -f group -n HACMPresource
+epprd_rg:clwparname[44] [[ -z '' ]]
+epprd_rg:clwparname[44] exit 0
+epprd_rg:clifconfig[175] WPARNAME=''
+epprd_rg:clifconfig[176] (( 0 == 0 ))
+epprd_rg:clifconfig[176] [[ -n '' ]]
+epprd_rg:clifconfig[218] belongs_to_an_active_wpar 61.81.244.156
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] clodmget '-qname = WPAR_NAME' -f group -n HACMPresource
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] [[ -z '' ]]
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] return 1
+epprd_rg:clifconfig[218] read wpar_name wpar_if wpar_netmask wpar_broadcast
+epprd_rg:clifconfig[218] IFS='~'
+epprd_rg:clifconfig[219] rc=1
+epprd_rg:clifconfig[221] [[ 1 == 0 ]]
+epprd_rg:clifconfig[275] ifconfig en0 delete 61.81.244.156
+epprd_rg:cl_swap_IP_address[611] [[ 1 == 1 ]]
+epprd_rg:cl_swap_IP_address[613] [[ -n '' ]]
+epprd_rg:cl_swap_IP_address[662] [[ -n 61.81.244.134 ]]
+epprd_rg:cl_swap_IP_address[671] (( 720005 <= 710003 ))
+epprd_rg:cl_swap_IP_address[675] clifconfig en0 alias 61.81.244.134 netmask 255.255.255.0
+epprd_rg:clifconfig[117] version=1.9
+epprd_rg:clifconfig[121] set -A args en0 alias 61.81.244.134 netmask 255.255.255.0
+epprd_rg:clifconfig[124] interface=en0
+epprd_rg:clifconfig[125] shift
+epprd_rg:clifconfig[127] [[ -n alias ]]
+epprd_rg:clifconfig[129] alias_val=1
+epprd_rg:clifconfig[167] shift
+epprd_rg:clifconfig[127] [[ -n 61.81.244.134 ]]
+epprd_rg:clifconfig[147] params=' address=61.81.244.134'
+epprd_rg:clifconfig[147] addr=61.81.244.134
+epprd_rg:clifconfig[167] shift
+epprd_rg:clifconfig[127] [[ -n netmask ]]
+epprd_rg:clifconfig[149] params=' address=61.81.244.134 netmask=255.255.255.0'
+epprd_rg:clifconfig[149] shift
+epprd_rg:clifconfig[167] shift
+epprd_rg:clifconfig[127] [[ -n '' ]]
+epprd_rg:clifconfig[174] [[ -n 1 ]]
+epprd_rg:clifconfig[174] [[ -n epprd_rg ]]
+epprd_rg:clifconfig[175] clwparname epprd_rg
+epprd_rg:clwparname[38] version=1.3.1.1
+epprd_rg:clwparname[44] clodmget '-qname = WPAR_NAME' -f group -n HACMPresource
+epprd_rg:clwparname[44] [[ -z '' ]]
+epprd_rg:clwparname[44] exit 0
+epprd_rg:clifconfig[175] WPARNAME=''
+epprd_rg:clifconfig[176] (( 0 == 0 ))
+epprd_rg:clifconfig[176] [[ -n '' ]]
+epprd_rg:clifconfig[218] belongs_to_an_active_wpar 61.81.244.134
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] clodmget '-qname = WPAR_NAME' -f group -n HACMPresource
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] [[ -z '' ]]
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] return 1
+epprd_rg:clifconfig[218] read wpar_name wpar_if wpar_netmask wpar_broadcast
+epprd_rg:clifconfig[218] IFS='~'
+epprd_rg:clifconfig[219] rc=1
+epprd_rg:clifconfig[221] [[ 1 == 0 ]]
+epprd_rg:clifconfig[275] ifconfig en0 alias 61.81.244.134 netmask 255.255.255.0
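The actual address swap is two clifconfig calls: line 609 deleted the service alias, and line 675 re-adds the boot address as an alias with the service netmask; both fall through to plain ifconfig here because no WPAR is configured. The equivalent bare ifconfig sequence, sketched with this trace's values:

    #!/bin/ksh
    # Sketch: swap a service alias for the boot address on one interface.
    # Values from this trace; WPAR handling omitted.
    IF=en0
    SERVICE=61.81.244.156
    BOOT=61.81.244.134
    NETMASK=255.255.255.0

    ifconfig $IF delete $SERVICE                # drop the service alias
    ifconfig $IF alias $BOOT netmask $NETMASK   # ensure boot address is up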
+epprd_rg:cl_swap_IP_address[679] /usr/es/sbin/cluster/.restore_routes
+epprd_rg:.restore_routes[+9] date
+epprd_rg:.restore_routes[+9] : Starting /usr/es/sbin/cluster/.restore_routes at Sat Jan 28 18:00:17 KORST 2023
+epprd_rg:.restore_routes[+11] cl_route_change default 127.0.0.1 61.81.244.1 inet
+epprd_rg:.restore_routes[+12] rc=0
+epprd_rg:.restore_routes[+13] [[ 0 != 0 ]]
+epprd_rg:.restore_routes[+19] exit
+epprd_rg:cl_swap_IP_address[680] [[ 0 == 0 ]]
+epprd_rg:cl_swap_IP_address[680] [[ 0 != 0 ]]
+epprd_rg:cl_swap_IP_address[681] : Completed /usr/es/sbin/cluster/.restore_routes with return code 0
+epprd_rg:cl_swap_IP_address[682] enable_pmtu_gated
Setting tcp_pmtu_discover to 1
Setting udp_pmtu_discover to 1
+epprd_rg:cl_swap_IP_address[685] hats_adapter_notify en0 -d 61.81.244.156 alias
2023-01-28T18:00:17.226380 hats_adapter_notify
2023-01-28T18:00:17.227598 hats_adapter_notify
+epprd_rg:cl_swap_IP_address[688] check_alias_status en0 61.81.244.156 release
+epprd_rg:cl_swap_IP_address[check_alias_status:108] CH_INTERFACE=en0
+epprd_rg:cl_swap_IP_address[check_alias_status:109] CH_ADDRESS=61.81.244.156
+epprd_rg:cl_swap_IP_address[check_alias_status:110] CH_ACQ_OR_RLSE=release
+epprd_rg:cl_swap_IP_address[check_alias_status:118] IF_IB=en0
+epprd_rg:cl_swap_IP_address[check_alias_status:120] awk '{print index($0, "ib")}'
+epprd_rg:cl_swap_IP_address[check_alias_status:120] echo en0
+epprd_rg:cl_swap_IP_address[check_alias_status:120] IS_IB=0
+epprd_rg:cl_swap_IP_address[check_alias_status:122] [[ 0 != 1 ]]
+epprd_rg:cl_swap_IP_address[check_alias_status:124] clifconfig en0
+epprd_rg:cl_swap_IP_address[check_alias_status:124] fgrep -w 61.81.244.156
+epprd_rg:cl_swap_IP_address[check_alias_status:124] awk '{print $2}'
+epprd_rg:clifconfig[117] version=1.9
+epprd_rg:clifconfig[121] set -A args en0
+epprd_rg:clifconfig[124] interface=en0
+epprd_rg:clifconfig[125] shift
+epprd_rg:clifconfig[127] [[ -n '' ]]
+epprd_rg:clifconfig[174] [[ -n '' ]]
+epprd_rg:clifconfig[218] belongs_to_an_active_wpar
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] clodmget '-qname = WPAR_NAME' -f group -n HACMPresource
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] [[ -z '' ]]
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] return 1
+epprd_rg:clifconfig[218] read wpar_name wpar_if wpar_netmask wpar_broadcast
+epprd_rg:clifconfig[218] IFS='~'
+epprd_rg:clifconfig[219] rc=1
+epprd_rg:clifconfig[221] [[ 1 == 0 ]]
+epprd_rg:clifconfig[275] ifconfig en0
+epprd_rg:cl_swap_IP_address[check_alias_status:124] ADDR=''
+epprd_rg:cl_swap_IP_address[check_alias_status:129] [ release = acquire ]
+epprd_rg:cl_swap_IP_address[check_alias_status:139] [[ '' == 61.81.244.156 ]]
+epprd_rg:cl_swap_IP_address[check_alias_status:144] return 0
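check_alias_status (traced at lines 108-144 above) confirms the result: it lists the interface's current addresses, greps for the service IP, and on a release expects to find nothing. The same check standalone, under the same assumptions:

    #!/bin/ksh
    # Sketch: confirm a released alias is really gone from an interface.
    IF=en0
    ADDRESS=61.81.244.156    # the service address, from this trace

    ADDR=$(ifconfig $IF | fgrep -w $ADDRESS | awk '{print $2}')

    if [[ $ADDR == $ADDRESS ]]
    then
        print "ERROR: $ADDRESS is still configured on $IF"
        exit 1
    fi
    exit 0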
+epprd_rg:cl_swap_IP_address[689] RC1=0
+epprd_rg:cl_swap_IP_address[690] [[ 0 == 0 ]]
+epprd_rg:cl_swap_IP_address[690] [[ 0 != 0 ]]
+epprd_rg:cl_swap_IP_address[693] [[ 0 != 0 ]]
+epprd_rg:cl_swap_IP_address[697] amlog_trace '' 'Deliasing Service IP|61.81.244.156'
+epprd_rg:cl_swap_IP_address[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_swap_IP_address[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_swap_IP_address[amlog_trace:319] cltime
+epprd_rg:cl_swap_IP_address[amlog_trace:319] DATE=2023-01-28T18:00:17.281918
+epprd_rg:cl_swap_IP_address[amlog_trace:320] echo '|2023-01-28T18:00:17.281918|INFO: Deliasing Service IP|61.81.244.156'
+epprd_rg:cl_swap_IP_address[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_swap_IP_address[701] [[ 0 != 0 ]]
+epprd_rg:cl_swap_IP_address[714] flush_arp
+epprd_rg:cl_swap_IP_address[flush_arp:49] arp -an
+epprd_rg:cl_swap_IP_address[flush_arp:49] grep '\?'
+epprd_rg:cl_swap_IP_address[flush_arp:49] tr -d '()'
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.27
61.81.244.27 (61.81.244.27) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.217
61.81.244.217 (61.81.244.217) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.220
61.81.244.220 (61.81.244.220) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.221
61.81.244.221 (61.81.244.221) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.224
61.81.244.224 (61.81.244.224) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.239
61.81.244.239 (61.81.244.239) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.251
61.81.244.251 (61.81.244.251) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.252
61.81.244.252 (61.81.244.252) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.123
61.81.244.123 (61.81.244.123) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.126
61.81.244.126 (61.81.244.126) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.140
61.81.244.140 (61.81.244.140) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.145
61.81.244.145 (61.81.244.145) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.146
61.81.244.146 (61.81.244.146) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.1
61.81.244.1 (61.81.244.1) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.154
61.81.244.154 (61.81.244.154) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:52] return 0
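flush_arp (lines 49-52 above) walks the ARP table and deletes every completed entry so that neighbours re-learn the MAC behind the moved address. The loop, extracted as-is:

    #!/bin/ksh
    # Sketch: flush dynamic ARP entries after moving a service IP, so
    # peers re-learn the MAC behind the address. arp -an prints lines
    # like '? (61.81.244.1) at ...'; tr strips the parentheses.
    arp -an | grep '\?' | tr -d '()' | while read host addr other
    do
        arp -d $addr
    done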
+epprd_rg:cl_swap_IP_address[716] netstat -in
Name Mtu   Network    Address            Ipkts     Ierrs Opkts    Oerrs Coll
en0  1500  link#2     fa.e6.13.4e.a9.20  183735413 0     60752089 0     0
en0  1500  61.81.244  61.81.244.134      183735413 0     60752089 0     0
lo0  16896 link#1                        34267434  0     34267434 0     0
lo0  16896 127        127.0.0.1          34267434  0     34267434 0     0
lo0  16896 ::1%1                         34267434  0     34267434 0     0
+epprd_rg:cl_swap_IP_address[717] netstat -rnC
Routing tables
Destination    Gateway        Flags Wt Policy If  Cost Config_Cost
Route tree for Protocol Family 2 (Internet):
default        61.81.244.1    UG    1  -      en0 0    0
61.81.244.0    61.81.244.134  UHSb  1  -      en0 0    0  =>
61.81.244/24   61.81.244.134  U     1  -      en0 0    0
61.81.244.134  127.0.0.1      UGHS  1  -      lo0 0    0
61.81.244.255  61.81.244.134  UHSb  1  -      en0 0    0
127/8          127.0.0.1      U     1  -      lo0 0    0
Route tree for Protocol Family 24 (Internet v6):
::1%1          ::1%1          UH    1  -      lo0 0    0
+epprd_rg:cl_swap_IP_address[989] no -o ipignoreredirects=0
Setting ipignoreredirects to 0
+epprd_rg:cl_swap_IP_address[992] cl_echo 32 'Completed execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating release en0 61.81.244.134 61.81.244.156 255.255.255.0. Exit status = 0' /usr/es/sbin/cluster/events/utils/cl_swap_IP_address 'rotating release en0 61.81.244.134 61.81.244.156 255.255.255.0' 0
Jan 28 2023 18:00:17 Completed execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating release en0 61.81.244.134 61.81.244.156 255.255.255.0. Exit status = 0
+epprd_rg:cl_swap_IP_address[994] date
Sat Jan 28 18:00:17 KORST 2023
+epprd_rg:cl_swap_IP_address[996] exit 0
+epprd_rg:release_service_addr[225] RC=0
+epprd_rg:release_service_addr[227] [[ 0 != 0 ]]
+epprd_rg:release_service_addr[245] cl_RMupdate resource_down All_nonerror_service_addrs release_service_addr
2023-01-28T18:00:17.374980
2023-01-28T18:00:17.379434
+epprd_rg:release_service_addr[249] [[ UNDEFINED != UNDEFINED ]]
+epprd_rg:release_service_addr[252] NSORDER=''
+epprd_rg:release_service_addr[252] export NSORDER
+epprd_rg:release_service_addr[255] exit 0
Jan 28 2023 18:00:17 EVENT COMPLETED: release_service_addr 0
|2023-01-28T18:00:17|22169|EVENT COMPLETED: release_service_addr 0|
+epprd_rg:process_resources[release_service_labels:3129] RC=0
+epprd_rg:process_resources[release_service_labels:3131] (( 0 != 0 && 0 != 11 ))
+epprd_rg:process_resources[release_service_labels:3146] (( 0 != 0 ))
+epprd_rg:process_resources[release_service_labels:3152] refresh -s clcomd
0513-095 The request for subsystem refresh was completed successfully.
+epprd_rg:process_resources[release_service_labels:3154] return 0
+epprd_rg:process_resources[3412] RC=0
+epprd_rg:process_resources[3413] (( 0 != 0 ))
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T18:00:19.622207 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=WPAR ACTION=RELEASE RESOURCE_GROUPS='"epprd_rg' '"'
+epprd_rg:process_resources[1] JOB_TYPE=WPAR
+epprd_rg:process_resources[1] ACTION=RELEASE
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ WPAR == RELEASE ]]
+epprd_rg:process_resources[3360] [[ WPAR == ONLINE ]]
+epprd_rg:process_resources[3492] process_wpars RELEASE
+epprd_rg:process_resources[process_wpars:3265] PS4_FUNC=process_wpars
+epprd_rg:process_resources[process_wpars:3265] typeset PS4_FUNC
+epprd_rg:process_resources[process_wpars:3266] [[ high == high ]]
+epprd_rg:process_resources[process_wpars:3266] set -x
+epprd_rg:process_resources[process_wpars:3267] STAT=0
+epprd_rg:process_resources[process_wpars:3268] action=RELEASE
+epprd_rg:process_resources[process_wpars:3268] typeset action
+epprd_rg:process_resources[process_wpars:3272] export GROUPNAME
+epprd_rg:process_resources[process_wpars:3280] clstop_wpar
+epprd_rg:clstop_wpar[42] version=1.7
+epprd_rg:clstop_wpar[46] [[ rg_move == reconfig_resource_release ]]
+epprd_rg:clstop_wpar[46] [[ RELEASE_PRIMARY == reconfig_resource_release ]]
+epprd_rg:clstop_wpar[55] clodmget '-qname = WPAR_NAME' -f group -n HACMPresource
+epprd_rg:clstop_wpar[55] [[ -z '' ]]
+epprd_rg:clstop_wpar[55] exit 0
+epprd_rg:process_resources[process_wpars:3281] RC=0
+epprd_rg:process_resources[process_wpars:3285] (( 0 != 0 && 0 != 11 ))
+epprd_rg:process_resources[process_wpars:3294] return 0
+epprd_rg:process_resources[3493] RC=0
+epprd_rg:process_resources[3495] [[ RELEASE == RELEASE ]]
+epprd_rg:process_resources[3497] (( 0 != 0 ))
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T18:00:19.660112 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=OFFLINE RESOURCE_GROUPS='"epprd_rg"'
+epprd_rg:process_resources[1] JOB_TYPE=OFFLINE
+epprd_rg:process_resources[1] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ OFFLINE == RELEASE ]]
+epprd_rg:process_resources[3360] [[ OFFLINE == ONLINE ]]
+epprd_rg:process_resources[3681] set_resource_group_state DOWN
+epprd_rg:process_resources[set_resource_group_state:82] PS4_FUNC=set_resource_group_state
+epprd_rg:process_resources[set_resource_group_state:82] typeset PS4_FUNC
+epprd_rg:process_resources[set_resource_group_state:83] [[ high == high ]]
+epprd_rg:process_resources[set_resource_group_state:83] set -x
+epprd_rg:process_resources[set_resource_group_state:84] STAT=0
+epprd_rg:process_resources[set_resource_group_state:85] new_status=DOWN
+epprd_rg:process_resources[set_resource_group_state:89] export GROUPNAME
+epprd_rg:process_resources[set_resource_group_state:90] [[ DOWN != DOWN ]]
+epprd_rg:process_resources[set_resource_group_state:100] : Resource Manager Updates
+epprd_rg:process_resources[set_resource_group_state:122] cl_RMupdate rg_down epprd_rg process_resources
2023-01-28T18:00:19.695964
2023-01-28T18:00:19.700081
+epprd_rg:process_resources[set_resource_group_state:124] amlog_trace '' 'acquire|epprd_rg|epprda'
+epprd_rg:process_resources[amlog_trace:318] clcycle clavailability.log
+epprd_rg:process_resources[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:process_resources[amlog_trace:319] cltime
+epprd_rg:process_resources[amlog_trace:319] DATE=2023-01-28T18:00:19.731764
+epprd_rg:process_resources[amlog_trace:320] echo '|2023-01-28T18:00:19.731764|INFO: acquire|epprd_rg|epprda'
+epprd_rg:process_resources[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:process_resources[set_resource_group_state:153] return 0
+epprd_rg:process_resources[3682] RC=0
+epprd_rg:process_resources[3683] postvg_for_rdisk
+epprd_rg:process_resources[postvg_for_rdisk:856] PS4_FUNC=postvg_for_rdisk
+epprd_rg:process_resources[postvg_for_rdisk:856] typeset PS4_FUNC
+epprd_rg:process_resources[postvg_for_rdisk:857] [[ high == high ]]
+epprd_rg:process_resources[postvg_for_rdisk:857] set -x
+epprd_rg:process_resources[postvg_for_rdisk:858] STAT=0
+epprd_rg:process_resources[postvg_for_rdisk:859] FAILURE_IN_METHOD=0
+epprd_rg:process_resources[postvg_for_rdisk:859] typeset -li FAILURE_IN_METHOD
+epprd_rg:process_resources[postvg_for_rdisk:860] LIST_OF_FAILED_RGS=''
+epprd_rg:process_resources[postvg_for_rdisk:861] RG_LIST=epprd_rg
+epprd_rg:process_resources[postvg_for_rdisk:862] RDISK_LIST=''
+epprd_rg:process_resources[postvg_for_rdisk:863] DISK_LIST=''
+epprd_rg:process_resources[postvg_for_rdisk:866] : Resource groups are processed individually. This is required because
+epprd_rg:process_resources[postvg_for_rdisk:867] : the replication mechanism may differ between resource groups.
+epprd_rg:process_resources[postvg_for_rdisk:871] getReplicatedResources epprd_rg
+epprd_rg:process_resources[getReplicatedResources:699] PS4_FUNC=getReplicatedResources
+epprd_rg:process_resources[getReplicatedResources:699] typeset PS4_FUNC
+epprd_rg:process_resources[getReplicatedResources:700] [[ high == high ]]
+epprd_rg:process_resources[getReplicatedResources:700] set -x
+epprd_rg:process_resources[getReplicatedResources:702] RV=false
+epprd_rg:process_resources[getReplicatedResources:704] clodmget -n -f type HACMPrresmethods
+epprd_rg:process_resources[getReplicatedResources:704] [[ -n 9 ]]
+epprd_rg:process_resources[getReplicatedResources:707] : Replicated resource methods are defined, check for resources
+epprd_rg:process_resources[getReplicatedResources:709] clodmget -q $'name like \'*_REP_RESOURCE\' AND group=epprd_rg' -f value -n HACMPresource
+epprd_rg:process_resources[getReplicatedResources:709] [[ -n '' ]]
+epprd_rg:process_resources[getReplicatedResources:718] : Verify if any backup profiles are configured and trigger cbm utilities based on that
+epprd_rg:process_resources[getReplicatedResources:720] clodmget -q name=BACKUP_ENABLED -f value HACMPresource
+epprd_rg:process_resources[getReplicatedResources:720] [[ -n '' ]]
+epprd_rg:process_resources[getReplicatedResources:739] echo false
+epprd_rg:process_resources[postvg_for_rdisk:871] REPLICATED_RESOURCES=false
+epprd_rg:process_resources[postvg_for_rdisk:873] [[ false == true ]]
+epprd_rg:process_resources[postvg_for_rdisk:946] return 0
+epprd_rg:process_resources[3684] (( 0 != 0 ))
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T18:00:19.756044 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=NONE
+epprd_rg:process_resources[1] JOB_TYPE=NONE
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ NONE == RELEASE ]]
+epprd_rg:process_resources[3360] [[ NONE == ONLINE ]]
+epprd_rg:process_resources[3729] break
+epprd_rg:process_resources[3740] : If sddsrv was turned off above, turn it back on again
+epprd_rg:process_resources[3742] [[ FALSE == TRUE ]]
+epprd_rg:process_resources[3747] exit 0
:rg_move[247] : unsetting AM_SYNC_CALLED_BY from caller's environment as we don't require it after this point in execution.
:rg_move[250] unset AM_SYNC_CALLED_BY
:rg_move[253] [[ -f /tmp/.NFSSTOPPED ]]
:rg_move[274] [[ -f /tmp/.RPCLOCKDSTOPPED ]]
:rg_move[293] exit 0
Jan 28 2023 18:00:19 EVENT COMPLETED: rg_move epprda 1 RELEASE 0
|2023-01-28T18:00:19|22169|EVENT COMPLETED: rg_move epprda 1 RELEASE 0|
:clevlog[amlog_trace:318] clcycle clavailability.log
:clevlog[amlog_trace:318] 1> /dev/null 2>& 1
:clevlog[amlog_trace:319] cltime
:clevlog[amlog_trace:319] DATE=2023-01-28T18:00:19.850501
:clevlog[amlog_trace:320] echo '|2023-01-28T18:00:19.850501|INFO: rg_move|epprd_rg|epprda|1|RELEASE|0'
:clevlog[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
:rg_move_release[+68] exit 0
Jan 28 2023 18:00:19 EVENT COMPLETED: rg_move_release epprda 1 0
|2023-01-28T18:00:19|22169|EVENT COMPLETED: rg_move_release epprda 1 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T18:00:19.975887
+ echo '|2023-01-28T18:00:19.975887|INFO: rg_move_release|epprd_rg|epprda|1|0'
+ 1>> /var/hacmp/availability/clavailability.log
Jan 28 2023 18:00:22 EVENT START: rg_move_fence epprda 1
|2023-01-28T18:00:22|22169|EVENT START: rg_move_fence epprda 1|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T18:00:22.172081
+ echo '|2023-01-28T18:00:22.172081|INFO: rg_move_fence|epprd_rg|epprda|1'
+ 1>> /var/hacmp/availability/clavailability.log
:rg_move_fence[62] [[ high == high ]]
:rg_move_fence[62] version=1.11
:rg_move_fence[63] NODENAME=epprda
:rg_move_fence[63] export NODENAME
:rg_move_fence[65] set -u
:rg_move_fence[67] [ 2 != 2 ]
:rg_move_fence[73] set +u
:rg_move_fence[75] [[ -z TRUE ]]
:rg_move_fence[80] [[ TRUE == TRUE ]]
:rg_move_fence[82] LOCAL_NODENAME=epprda
:rg_move_fence[83] odmget -qid=1 HACMPgroup
:rg_move_fence[83] egrep 'group ='
:rg_move_fence[83] awk '{print $3}'
:rg_move_fence[83] eval RGNAME='"epprd_rg"'
:rg_move_fence[1] RGNAME=epprd_rg
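rg_move_fence recovers the resource group name straight from the ODM (line 83 above): odmget prints stanzas containing group = "epprd_rg", so the third awk field is the quoted name and the eval strips the quotes. The lookup standalone, assuming an HACMPgroup entry with id 1 exists:

    #!/bin/ksh
    # Sketch: recover a resource group name from the HACMP ODM.
    # odmget emits a stanza line:  group = "epprd_rg"
    eval RGNAME=$(odmget -qid=1 HACMPgroup | egrep 'group =' | awk '{print $3}')
    print $RGNAME    # -> epprd_rg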
+epprd_rg:rg_move_fence[84] GROUPNAME=epprd_rg
+epprd_rg:rg_move_fence[85] group_state='$RESGRP_epprd_rg_epprda'
+epprd_rg:rg_move_fence[86] set +u
+epprd_rg:rg_move_fence[87] eval print '$RESGRP_epprd_rg_epprda'
+epprd_rg:rg_move_fence[1] print ONLINE
+epprd_rg:rg_move_fence[87] RG_MOVE_ONLINE=ONLINE
+epprd_rg:rg_move_fence[87] export RG_MOVE_ONLINE
+epprd_rg:rg_move_fence[88] set -u
+epprd_rg:rg_move_fence[89] RG_MOVE_ONLINE=ONLINE
+epprd_rg:rg_move_fence[91] set -a
+epprd_rg:rg_move_fence[92] clsetenvgrp epprda rg_move epprd_rg ''
:clsetenvgrp[+49] [[ high = high ]]
:clsetenvgrp[+49] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clsetenvgrp.sh 1$
:clsetenvgrp[+51] usingVer=clSetenvgrp
:clsetenvgrp[+56] clSetenvgrp epprda rg_move epprd_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+57] exit 0
+epprd_rg:rg_move_fence[92] clsetenvgrp_output=FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
+epprd_rg:rg_move_fence[93] RC=0
+epprd_rg:rg_move_fence[94] eval FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
+epprd_rg:rg_move_fence[1] FORCEDOWN_GROUPS=''
+epprd_rg:rg_move_fence[2] RESOURCE_GROUPS=''
+epprd_rg:rg_move_fence[3] HOMELESS_GROUPS=''
+epprd_rg:rg_move_fence[4] HOMELESS_FOLLOWER_GROUPS=''
+epprd_rg:rg_move_fence[5] ERRSTATE_GROUPS=''
+epprd_rg:rg_move_fence[6] PRINCIPAL_ACTIONS=''
+epprd_rg:rg_move_fence[7] ASSOCIATE_ACTIONS=''
+epprd_rg:rg_move_fence[8] AUXILLIARY_ACTIONS=''
+epprd_rg:rg_move_fence[8] SIBLING_GROUPS=''
+epprd_rg:rg_move_fence[9] SIBLING_NODES_BY_GROUP=''
+epprd_rg:rg_move_fence[10] SIBLING_ACQUIRING_GROUPS=''
+epprd_rg:rg_move_fence[11] SIBLING_ACQUIRING_NODES_BY_GROUP=''
+epprd_rg:rg_move_fence[12] SIBLING_RELEASING_GROUPS=''
+epprd_rg:rg_move_fence[13] SIBLING_RELEASING_NODES_BY_GROUP=''
+epprd_rg:rg_move_fence[95] set +a
+epprd_rg:rg_move_fence[96] [ 0 -ne 0 ]
+epprd_rg:rg_move_fence[103] process_resources FENCE
:rg_move_fence[3318] version=1.169
:rg_move_fence[3321] STATUS=0
:rg_move_fence[3322] sddsrv_off=FALSE
:rg_move_fence[3324] true
:rg_move_fence[3326] : call rgpa, and it will tell us what to do next
:rg_move_fence[3328] set -a
:rg_move_fence[3329] clRGPA FENCE
:clRGPA[+47] [[ high = high ]]
:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
:clRGPA[+49] usingVer=clrgpa
:clRGPA[+54] clrgpa FENCE
2023-01-28T18:00:22.277569 clrgpa
:clRGPA[+55] exit 0
:rg_move_fence[3329] eval JOB_TYPE=NONE
:rg_move_fence[1] JOB_TYPE=NONE
:rg_move_fence[3330] RC=0
:rg_move_fence[3331] set +a
:rg_move_fence[3333] (( 0 != 0 ))
:rg_move_fence[3342] RESOURCE_GROUPS=''
:rg_move_fence[3343] GROUPNAME=''
:rg_move_fence[3343] export GROUPNAME
:rg_move_fence[3353] IS_SERVICE_START=1
:rg_move_fence[3354] IS_SERVICE_STOP=1
:rg_move_fence[3360] [[ NONE == RELEASE ]]
:rg_move_fence[3360] [[ NONE == ONLINE ]]
:rg_move_fence[3729] break
:rg_move_fence[3740] : If sddsrv was turned off above, turn it back on again
:rg_move_fence[3742] [[ FALSE == TRUE ]]
:rg_move_fence[3747] exit 0
+epprd_rg:rg_move_fence[104] : exit status of process_resources FENCE is: 0
+epprd_rg:rg_move_fence[107] [[ TRUE == TRUE ]]
+epprd_rg:rg_move_fence[109] export EVENT_TYPE
+epprd_rg:rg_move_fence[110] echo RELEASE_PRIMARY
RELEASE_PRIMARY
+epprd_rg:rg_move_fence[111] [[ -n '' ]]
+epprd_rg:rg_move_fence[141] exit 0
Jan 28 2023 18:00:22 EVENT COMPLETED: rg_move_fence epprda 1 0
|2023-01-28T18:00:22|22169|EVENT COMPLETED: rg_move_fence epprda 1 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T18:00:22.370858
+ echo '|2023-01-28T18:00:22.370858|INFO: rg_move_fence|epprd_rg|epprda|1|0'
+ 1>> /var/hacmp/availability/clavailability.log
PowerHA SystemMirror Event Summary
----------------------------------------------------------------------------
Serial number for this event: 22169
Event: TE_RG_MOVE_RELEASE
Start time: Sat Jan 28 18:00:05 2023
End time: Sat Jan 28 18:00:22 2023
Action:                    Resource:                     Script Name:
----------------------------------------------------------------------------
Releasing resource group:  epprd_rg                      process_resources
Search on: Sat.Jan.28.18:00:06.KORST.2023.process_resources.epprd_rg.ref
Releasing resource:        All_servers                   stop_server
Search on: Sat.Jan.28.18:00:06.KORST.2023.stop_server.All_servers.epprd_rg.ref
Resource offline:          All_nonerror_servers          stop_server
Search on: Sat.Jan.28.18:00:06.KORST.2023.stop_server.All_nonerror_servers.epprd_rg.ref
Releasing resource:        All_nfs_mounts                cl_deactivate_nfs
Search on: Sat.Jan.28.18:00:07.KORST.2023.cl_deactivate_nfs.All_nfs_mounts.epprd_rg.ref
Resource offline:          All_nonerror_nfs_mounts       cl_deactivate_nfs
Search on: Sat.Jan.28.18:00:11.KORST.2023.cl_deactivate_nfs.All_nonerror_nfs_mounts.epprd_rg.ref
Releasing resource:        All_exports                   cl_unexport_fs
Search on: Sat.Jan.28.18:00:11.KORST.2023.cl_unexport_fs.All_exports.epprd_rg.ref
Resource offline:          All_nonerror_exports          cl_unexport_fs
Search on: Sat.Jan.28.18:00:11.KORST.2023.cl_unexport_fs.All_nonerror_exports.epprd_rg.ref
Releasing resource:        All_filesystems               cl_deactivate_fs
Search on: Sat.Jan.28.18:00:11.KORST.2023.cl_deactivate_fs.All_filesystems.epprd_rg.ref
Resource offline:          All_non_error_filesystems     cl_deactivate_fs
Search on: Sat.Jan.28.18:00:15.KORST.2023.cl_deactivate_fs.All_non_error_filesystems.epprd_rg.ref
Releasing resource:        All_volume_groups             cl_deactivate_vgs
Search on: Sat.Jan.28.18:00:15.KORST.2023.cl_deactivate_vgs.All_volume_groups.epprd_rg.ref
Resource offline:          All_nonerror_volume_groups    cl_deactivate_vgs
Search on: Sat.Jan.28.18:00:16.KORST.2023.cl_deactivate_vgs.All_nonerror_volume_groups.epprd_rg.ref
Releasing resource:        All_service_addrs             release_service_addr
Search on: Sat.Jan.28.18:00:16.KORST.2023.release_service_addr.All_service_addrs.epprd_rg.ref
Resource offline:          All_nonerror_service_addrs    release_service_addr
Search on: Sat.Jan.28.18:00:17.KORST.2023.release_service_addr.All_nonerror_service_addrs.epprd_rg.ref
Resource group offline:    epprd_rg                      process_resources
Search on: Sat.Jan.28.18:00:19.KORST.2023.process_resources.epprd_rg.ref
----------------------------------------------------------------------------
|EVENT_SUMMARY_START|TE_RG_MOVE_RELEASE|2023-01-28T18:00:05|2023-01-28T18:00:22|22169|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.18:00:06.KORST.2023.process_resources.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.18:00:06.KORST.2023.stop_server.All_servers.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.18:00:06.KORST.2023.stop_server.All_nonerror_servers.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.18:00:07.KORST.2023.cl_deactivate_nfs.All_nfs_mounts.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.18:00:11.KORST.2023.cl_deactivate_nfs.All_nonerror_nfs_mounts.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.18:00:11.KORST.2023.cl_unexport_fs.All_exports.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.18:00:11.KORST.2023.cl_unexport_fs.All_nonerror_exports.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.18:00:11.KORST.2023.cl_deactivate_fs.All_filesystems.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.18:00:15.KORST.2023.cl_deactivate_fs.All_non_error_filesystems.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.18:00:15.KORST.2023.cl_deactivate_vgs.All_volume_groups.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.18:00:16.KORST.2023.cl_deactivate_vgs.All_nonerror_volume_groups.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.18:00:16.KORST.2023.release_service_addr.All_service_addrs.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.18:00:17.KORST.2023.release_service_addr.All_nonerror_service_addrs.epprd_rg.ref.ref|
|EV_SUM_SEARCHON_STR|Sat.Jan.28.18:00:19.KORST.2023.process_resources.epprd_rg.ref.ref|
|EVENT_SUMMARY_END|
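Every 'Search on:' string above is a unique reference tag that the event scripts also write into the detailed log at the moment the action runs. To drill into a single action, search hacmp.out for its tag (the path below is the default hacmp.out location; adjust if it has been relocated):

    # AIX grep -p prints the whole paragraph around the match
    grep -p 'Sat.Jan.28.18:00:15.KORST.2023.cl_deactivate_vgs.All_volume_groups.epprd_rg.ref' \
        /var/hacmp/log/hacmp.out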
PowerHA SystemMirror Event Preamble
----------------------------------------------------------------------------
Serial number for this event: 22177
No resource state change initiated by the cluster manager as a result of this event
----------------------------------------------------------------------------
|EVENT_PREAMBLE_START|TE_JOIN_NETWORK|2023-01-28T18:00:26|22177|
|EVENT_NO_ACTIONS_QUEUED|
|EVENT_PREAMBLE_END|
Jan 28 2023 18:00:26 EVENT START: network_up epprda net_ether_01
|2023-01-28T18:00:26|22177|EVENT START: network_up epprda net_ether_01|
:network_up[+66] version=%I%
:network_up[+69] set -a
:network_up[+70] cllsparam -n epprda
:network_up[+70] eval NODE_NAME=epprda VERBOSE_LOGGING=high PS4='${GROUPNAME:++$GROUPNAME}:${PROGNAME:-${0##*/}}${PS4_TIMER:+($SECONDS)}${PS4_LOOP:+:$PS4_LOOP}[${ERRNO:+${PS4_FUNC:-}+}${KSH_VERSION:+${.sh.fun:+${.sh.fun}:}}$LINENO] ' DEBUG_LEVEL=Standard LC_ALL='C'
:network_up[+70] NODE_NAME=epprda VERBOSE_LOGGING=high PS4=${GROUPNAME:++$GROUPNAME}:${PROGNAME:-${0##*/}}${PS4_TIMER:+($SECONDS)}${PS4_LOOP:+:$PS4_LOOP}[${ERRNO:+${PS4_FUNC:-}+}${KSH_VERSION:+${.sh.fun:+${.sh.fun}:}}$LINENO] DEBUG_LEVEL=Standard LC_ALL=C
:network_up[+71] set +a
:network_up[+73] STATUS=0
:network_up[+75] [ 2 -ne 2 ]
:network_up[+81] [[ epprda == epprda ]]
:network_up[+82] amlog_trace 22177|epprda|net_ether_01
:network_up[+61] clcycle clavailability.log
:network_up[+61] 1> /dev/null 2>& 1
:network_up[+61] :network_up[+61] cltime
DATE=2023-01-28T18:00:26.950383
:network_up[+61] echo |2023-01-28T18:00:26.950383|INFO: 22177|epprda|net_ether_01
:network_up[+61] 1>> /var/hacmp/availability/clavailability.log
:network_up[+84] export NETWORKNAME=net_ether_01
:network_up[+89] [[ epprda == epprda ]]
:network_up[+90] amlog_trace 22177|epprda|net_ether_01
:network_up[+61] clcycle clavailability.log
:network_up[+61] 1> /dev/null 2>& 1
:network_up[+61] :network_up[+61] cltime
DATE=2023-01-28T18:00:26.977266
:network_up[+61] echo |2023-01-28T18:00:26.977266|INFO: 22177|epprda|net_ether_01
:network_up[+61] 1>> /var/hacmp/availability/clavailability.log
:network_up[+92] exit 0
Jan 28 2023 18:00:26 EVENT COMPLETED: network_up epprda net_ether_01 0
|2023-01-28T18:00:27|22177|EVENT COMPLETED: network_up epprda net_ether_01 0|
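Each amlog_trace call traced above follows the same three-step pattern: rotate clavailability.log if needed, take a timestamp, and append one pipe-delimited record (serial number, node, network). Condensed from the trace:

    clcycle clavailability.log > /dev/null 2>&1
    DATE=$(cltime)
    echo "|$DATE|INFO: 22177|epprda|net_ether_01" >> /var/hacmp/availability/clavailability.log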
Jan 28 2023 18:00:27 EVENT START: network_up_complete epprda net_ether_01
|2023-01-28T18:00:27|22177|EVENT START: network_up_complete epprda net_ether_01|
:network_up_complete[+68] version=%I%
:network_up_complete[+72] [ 2 -ne 2 ]
:network_up_complete[+78] [[ epprda == epprda ]]
:network_up_complete[+79] amlog_trace 22177|epprda|net_ether_01
:network_up_complete[+61] clcycle clavailability.log
:network_up_complete[+61] 1> /dev/null 2>& 1
:network_up_complete[+61] :network_up_complete[+61] cltime
DATE=2023-01-28T18:00:27.237634
:network_up_complete[+61] echo |2023-01-28T18:00:27.237634|INFO: 22177|epprda|net_ether_01
:network_up_complete[+61] 1>> /var/hacmp/availability/clavailability.log
:network_up_complete[+82] NODENAME=epprda
:network_up_complete[+83] NETWORK=net_ether_01
:network_up_complete[+84] export NETWORKNAME=net_ether_01
:network_up_complete[+86] [[ -z ]]
:network_up_complete[+88] EMULATE=REAL
:network_up_complete[+90] set -u
:network_up_complete[+96] STATUS=0
:network_up_complete[+100] odmget HACMPnode
:network_up_complete[+100] grep name =
:network_up_complete[+100] sort
:network_up_complete[+100] uniq
:network_up_complete[+100] wc -l
:network_up_complete[+100] [ 2 -eq 2 ]
:network_up_complete[+102] :network_up_complete[+102] odmget HACMPgroup
:network_up_complete[+102] grep group =
:network_up_complete[+102] awk {print $3}
:network_up_complete[+102] sed s/"//g
RESOURCE_GROUPS=epprd_rg
:network_up_complete[+106] :network_up_complete[+106] odmget -q group=epprd_rg AND name=EXPORT_FILESYSTEM HACMPresource
:network_up_complete[+106] grep value
:network_up_complete[+106] awk {print $3}
:network_up_complete[+106] sed s/"//g
EXPORTLIST=/board_org
:network_up_complete[+107] [ -n /board_org ]
:network_up_complete[+109] [ REAL = EMUL ]
:network_up_complete[+114] cl_update_statd
:cl_update_statd(0)[+174] version=%I%
:cl_update_statd(0)[+176] typeset -i RC=0
:cl_update_statd(0)[+178] LOCAL_FOUND=
:cl_update_statd(0)[+179] TWIN_NAME=
:cl_update_statd(0)[+180] [[ -z epprda ]]
:cl_update_statd(0)[+181] :cl_update_statd(0)[+181] cl_get_path -S
OP_SEP=~
:cl_update_statd(0)[+182] set -u
:cl_update_statd(0)[+187] LOCAL_FOUND=true
:cl_update_statd(0)[+189] TWIN_NAME=epprds
:cl_update_statd(0)[+194] : Make sure statd is running locally
:cl_update_statd(0)[+196] lssrc -s statd
:cl_update_statd(0)[+196] LC_ALL=C
:cl_update_statd(0)[+196] grep -qw inoperative
:cl_update_statd(0)[+196] rpcinfo -p
:cl_update_statd(0)[+196] LC_ALL=C
:cl_update_statd(0)[+196] grep -qw status
:cl_update_statd(0)[+207] : Get the current twin, if there is one
:cl_update_statd(0)[+209] :cl_update_statd(0)[+209] nfso -H sm_gethost
:cl_update_statd(0)[+209] 2>& 1
CURTWIN=epprds
:cl_update_statd(0)[+210] RC=0
:cl_update_statd(0)[+212] [[ -z true ]]
:cl_update_statd(0)[+212] [[ -z epprds ]]
:cl_update_statd(0)[+225] : Get the interface to the twin node
:cl_update_statd(0)[+227] :cl_update_statd(0)[+227] get_node_ip epprds
:cl_update_statd(0)[+9] (( 1 != 1 ))
:cl_update_statd(0)[+15] Twin_Name=epprds
:cl_update_statd(0)[+16] NewTwin=
:cl_update_statd(0)[+19] : Get the Interface details for every interface on the twin node
:cl_update_statd(0)[+20] : Reject interfaces on nodes that are not public boot addresses
:cl_update_statd(0)[+21] : because those are the only ones we have state information for
:cl_update_statd(0)[+23] :cl_update_statd(0)[+23] cllsif -J ~ -Sw -i epprda
:cl_update_statd(0)[+23] LC_ALL=C
LOCAL_NETWORK_INFO=epprda~boot~net_ether_01~ether~public~epprda~61.81.244.134~~en0~~255.255.255.0~~~24~AF_INET
epprd~service~net_ether_01~ether~public~epprda~61.81.244.156~~~~255.255.255.0~~ignore~24~AF_INET
:cl_update_statd(0)[+24] cllsif -J ~ -Sw -i epprds
:cl_update_statd(0)[+24] LC_ALL=C
:cl_update_statd(0)[+25] read adapt type network net_type attrib node ip_addr skip interface skip netmask skip skip prefix ip_family
:cl_update_statd(0)[+25] IFS=~
:cl_update_statd(0)[+25] [[ public != public ]]
:cl_update_statd(0)[+25] [[ boot != boot ]]
:cl_update_statd(0)[+33] : Find the state of this candidate
:cl_update_statd(0)[+33] [[ AF_INET == AF_INET ]]
:cl_update_statd(0)[+37] :cl_update_statd(0)[+37] print 61.81.244.123
:cl_update_statd(0)[+37] tr ./ xx
addr=i61x81x244x123_epprds
:cl_update_statd(0)[+43] eval candidate_state=${i61x81x244x123_epprds:-down}
:cl_update_statd(0)[+43] candidate_state=UP
:cl_update_statd(0)[+46] : If state is UP, check to see if this node can talk to it
:cl_update_statd(0)[+46] [[ UP == UP ]]
:cl_update_statd(0)[+50] ping -w 5 -c 1 -q 61.81.244.123
:cl_update_statd(0)[+50] 1> /dev/null
:cl_update_statd(0)[+61] echo epprda~boot~net_ether_01~ether~public~epprda~61.81.244.134~~en0~~255.255.255.0~~~24~AF_INET epprd~service~net_ether_01~ether~public~epprda~61.81.244.156~~~~255.255.255.0~~ignore~24~AF_INET
:cl_update_statd(0)[+61] tr \n
:cl_update_statd(0)[+62] read lcl_adapt lcl_type lcl_network lcl_net_type lcl_attrib lcl_node lcl_ip_addr skip lcl_interface skip lcl_netmask skip skip lcl_prefix lcl_ip_family
:cl_update_statd(0)[+62] IFS=~
:cl_update_statd(0)[+62] [[ net_ether_01 != net_ether_01 ]]
:cl_update_statd(0)[+62] [[ boot != boot ]]
:cl_update_statd(0)[+62] [[ public != public ]]
:cl_update_statd(0)[+62] [[ AF_INET != AF_INET ]]
:cl_update_statd(0)[+62] [[ AF_INET == AF_INET ]]
:cl_update_statd(0)[+71] :cl_update_statd(0)[+71] print 61.81.244.134
:cl_update_statd(0)[+71] tr ./ xx
addr=i61x81x244x134_epprda
:cl_update_statd(0)[+77] eval lcl_candidate_state=${i61x81x244x134_epprda:-down}
:cl_update_statd(0)[+77] lcl_candidate_state=UP
:cl_update_statd(0)[+77] [[ UP == UP ]]
:cl_update_statd(0)[+81] : epprds is on the same network as an interface that is up
:cl_update_statd(0)[+82] : on the local node, and the attributes match.
:cl_update_statd(0)[+84] NewTwin=epprds
:cl_update_statd(0)[+85] break
:cl_update_statd(0)[+85] [[ -n epprds ]]
:cl_update_statd(0)[+91] break
:cl_update_statd(0)[+91] [[ -z epprds ]]
:cl_update_statd(0)[+100] echo epprds
:cl_update_statd(0)[+101] return 0
NEWTWIN=epprds
:cl_update_statd(0)[+227] [[ -z epprds ]]
:cl_update_statd(0)[+227] [[ epprds != epprds ]]
:cl_update_statd(0)[+259] : RC is actually 0
:cl_update_statd(0)[+266] return 0
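cl_update_statd keeps rpc.statd's notion of its 'twin' (the NFS lock-recovery partner) in step with the cluster topology. Its checks condense to the lines below (nfso -H sm_gethost is an undocumented interface; its behavior here is taken from this log, not a manual):

    # Is statd alive and registered with the portmapper?
    lssrc -s statd | LC_ALL=C grep -qw inoperative && echo 'statd inoperative'
    rpcinfo -p | LC_ALL=C grep -qw status || echo 'statd not registered'

    # Current twin, per statd itself
    CURTWIN=$(nfso -H sm_gethost 2>&1)

A candidate twin is then accepted only if one of its public boot interfaces (from cllsif) is marked up and answers the ping probe shown above.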
:network_up_complete[+115] [ 0 -ne 0 ]
:network_up_complete[+120] break
:network_up_complete[+125] [[ epprda == epprda ]]
:network_up_complete[+131] :network_up_complete[+131] odmget -qname=net_ether_01 HACMPnetwork
:network_up_complete[+131] awk $1 == "alias" {print $3}
:network_up_complete[+131] sed s/"//g
ALIASING=1
:network_up_complete[+131] [[ 1 == 1 ]]
:network_up_complete[+133] cl_configure_persistent_address aliasing_network_up -n net_ether_01
:cl_configure_persistent_address[1344] version=1.56.1.4
:cl_configure_persistent_address[1346] cl_get_path -S
:cl_configure_persistent_address[1346] OP_SEP='~'
:cl_configure_persistent_address[1349] get_local_nodename
:get_local_nodename[48] version=1.2.1.28
:get_local_nodename[52] : cllsclstr -N will return the local node if not configured in HACMPcluster
:get_local_nodename[54] ODMDIR=/etc/es/objrepos
:get_local_nodename[54] export ODMDIR
:get_local_nodename[55] nodename=''
:get_local_nodename[55] typeset nodename
:get_local_nodename[56] cllsclstr -N
:get_local_nodename[56] nodename=epprda
:get_local_nodename[57] rc=0
:get_local_nodename[57] typeset -i rc
:get_local_nodename[58] (( 0 != 0 ))
:get_local_nodename[61] : If the node name in HACMPcluster matches a configured node, we are done.
:get_local_nodename[63] clnodename
:get_local_nodename[63] grep -w epprda
:get_local_nodename[63] [[ -n epprda ]]
:get_local_nodename[65] print -- epprda
:get_local_nodename[66] exit 0
:cl_configure_persistent_address[1349] LOCALNODENAME=epprda
:cl_configure_persistent_address[1354] [[ -z epprda ]]
:cl_configure_persistent_address[1356] NETWORK=''
:cl_configure_persistent_address[1357] ALIVE_IF=''
:cl_configure_persistent_address[1358] FAILED_IF=''
:cl_configure_persistent_address[1359] FAILED_ADDRESS=''
:cl_configure_persistent_address[1360] UPDATE_CLSTRMGR=1
:cl_configure_persistent_address[1361] CHECK_HA_ALIVE=1
:cl_configure_persistent_address[1362] RESTORE_ROUTES=/usr/es/sbin/cluster/.pers_restore_routes
:cl_configure_persistent_address[1363] RC=0
:cl_configure_persistent_address[1364] B_FLAG=0
:cl_configure_persistent_address[1366] ACTION=aliasing_network_up
:cl_configure_persistent_address[1367] shift
:cl_configure_persistent_address[1369] getopt n:a:f:i:dPB -n net_ether_01
:cl_configure_persistent_address[1369] set -- -n net_ether_01 --
:cl_configure_persistent_address[1371] (( 0 != 0 ))
:cl_configure_persistent_address[1371] [[ -z aliasing_network_up ]]
:cl_configure_persistent_address[1376] [[ -n != -- ]]
:cl_configure_persistent_address[1379] NETWORK=net_ether_01
:cl_configure_persistent_address[1380] shift
:cl_configure_persistent_address[1380] shift
:cl_configure_persistent_address[1376] [[ -- != -- ]]
:cl_configure_persistent_address[1418] shift
:cl_configure_persistent_address[1422] [[ aliasing_network_up == up ]]
:cl_configure_persistent_address[1520] [[ aliasing_network_up == swap ]]
:cl_configure_persistent_address[1667] [[ aliasing_network_up == fail_boot ]]
:cl_configure_persistent_address[1830] [[ aliasing_network_up == aliasing_network_up ]]
:cl_configure_persistent_address[1831] [[ -z net_ether_01 ]]
:cl_configure_persistent_address[1837] isAliasingNetwork net_ether_01
:cl_configure_persistent_address[isAliasingNetwork:386] PS4_FUNC=isAliasingNetwork
:cl_configure_persistent_address[isAliasingNetwork:386] typeset PS4_FUNC
:cl_configure_persistent_address[isAliasingNetwork:387] [[ high == high ]]
:cl_configure_persistent_address[isAliasingNetwork:387] set -x
:cl_configure_persistent_address[isAliasingNetwork:389] NETWORK=net_ether_01
:cl_configure_persistent_address[isAliasingNetwork:391] odmget -qname=net_ether_01 HACMPnetwork
:cl_configure_persistent_address[isAliasingNetwork:392] awk '$1 == "alias" {print $3}'
:cl_configure_persistent_address[isAliasingNetwork:393] sed 's/"//g'
:cl_configure_persistent_address[isAliasingNetwork:391] print 1
:cl_configure_persistent_address[1837] [[ 1 != 1 ]]
:cl_configure_persistent_address[1842] cllsif -J '~' -Spi epprda
:cl_configure_persistent_address[1842] awk -F~ '$2 == "persistent" && $3 == "net_ether_01" {print $1}'
:cl_configure_persistent_address[1842] PERSISTENT=''
:cl_configure_persistent_address[1844] [[ -z '' ]]
:cl_configure_persistent_address[1846] exit 0
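No persistent node IP label exists on net_ether_01, so cl_configure_persistent_address exits without making changes. The decisive test reduces to two lines, taken directly from the trace:

    PERSISTENT=$(cllsif -J '~' -Spi epprda | awk -F~ '$2 == "persistent" && $3 == "net_ether_01" {print $1}')
    [[ -z $PERSISTENT ]] && exit 0   # nothing to configure on this network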
:network_up_complete[+141] :network_up_complete[+141] cl_rrmethods2call net_initialization
:cl_rrmethods2call[56] version=%I%
:cl_rrmethods2call[84] RRMETHODS=''
:cl_rrmethods2call[85] NEED_RR_ENV_VARS=no
:cl_rrmethods2call[90] : The network methods are returned if the Network type is XD_data.
:cl_rrmethods2call[92] clodmget -qname=net_ether_01 -f nimname -n HACMPnetwork
:cl_rrmethods2call[92] RRNET=ether
:cl_rrmethods2call[94] [[ ether == XD_data ]]
:cl_rrmethods2call[98] return 0
METHODS=
:network_up_complete[+163] :network_up_complete[+163] clodmget -n -q name=MOUNT_FILESYSTEM -f group HACMPresource
CROSSMOUNTS=epprd_rg
:network_up_complete[+165] [ -n epprd_rg -a epprda = epprda ]
:network_up_complete[+168] : Remount any NFS cross mount if required
:network_up_complete[+174] :network_up_complete[+174] clodmget -n -f group HACMPgroup
RESOURCE_GROUPS=epprd_rg
:network_up_complete[+185] :network_up_complete[+185] clodmget -n -q name=MOUNT_FILESYSTEM and group=epprd_rg -f value HACMPresource
MOUNT_FILESYSTEM=/board;/board_org
:network_up_complete[+185] [[ -z /board;/board_org ]]
:network_up_complete[+189] IN_RG=false
:network_up_complete[+189] clodmget -n -q group=epprd_rg -f nodes HACMPgroup
:network_up_complete[+189] [[ epprda == epprda ]]
:network_up_complete[+192] IN_RG=true
:network_up_complete[+192] [[ epprds == epprda ]]
:network_up_complete[+192] [[ true == false ]]
:network_up_complete[+197] :network_up_complete[+197] clRGinfo -s epprd_rg
:network_up_complete[+197] awk -F : { if ( $2 == "ONLINE" ) print $3 }
clRGinfo[431]: version I
clRGinfo[517]: Number of resource groups = 1
clRGinfo[562]: cluster epprda_cluster is version = 22
clRGinfo[1439]: IPC target host name is 'localhost'
clRGinfo[685]: Current group is 'epprd_rg'
get primary state info for state 4
get secondary state info for state 4
getPrimaryStateStr: using primary_table => primary_state_for_short_output_table
get primary state info for state 4
get secondary state info for state 4
getPreviousStateString: Primary=4, Sec=-1
get primary state info for state 4
get secondary state info for state 4
getPrimaryStateStr: using primary_table => primary_state_for_short_output_table
get primary state info for state 4
get secondary state info for state 4
getPreviousStateString: Primary=4, Sec=-1
NFS_HOST=
:network_up_complete[+197] [[ -z ]]
:network_up_complete[+198] continue
:network_up_complete[+257] [[ epprda == epprda ]]
:network_up_complete[+257] [[ 0 -ne 0 ]]
:network_up_complete[+262] amlog_trace 22177|epprda|net_ether_01
:network_up_complete[+61] clcycle clavailability.log
:network_up_complete[+61] 1> /dev/null 2>& 1
:network_up_complete[+61] :network_up_complete[+61] cltime
DATE=2023-01-28T18:00:27.434191
:network_up_complete[+61] echo |2023-01-28T18:00:27.434191|INFO: 22177|epprda|net_ether_01
:network_up_complete[+61] 1>> /var/hacmp/availability/clavailability.log
:network_up_complete[+265] exit 0
Jan 28 2023 18:00:27 EVENT COMPLETED: network_up_complete epprda net_ether_01 0
|2023-01-28T18:00:27|22177|EVENT COMPLETED: network_up_complete epprda net_ether_01 0|
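The cross-mount pass comes to nothing because no node currently holds epprd_rg ONLINE to serve the export. The per-group test, taken from the trace (it runs inside a loop over resource groups, hence the continue):

    # Which node, if any, is serving this group's NFS exports?
    NFS_HOST=$(clRGinfo -s epprd_rg | awk -F: '$2 == "ONLINE" {print $3}')
    [[ -z $NFS_HOST ]] && continue   # group offline everywhere; skip the remount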
PowerHA SystemMirror Event Preamble
----------------------------------------------------------------------------
Serial number for this event: 22170
No resource state change initiated by the cluster manager as a result of this event
----------------------------------------------------------------------------
|EVENT_PREAMBLE_START|TE_FAIL_NODE_DEP_COMPLETE|2023-01-28T18:00:29|22170|
|EVENT_NO_ACTIONS_QUEUED|
|EVENT_PREAMBLE_END|
Jan 28 2023 18:00:29 EVENT START: node_down_complete epprda
|2023-01-28T18:00:29|22170|EVENT START: node_down_complete epprda|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T18:00:29.686321
+ echo '|2023-01-28T18:00:29.686321|INFO: node_down_complete|epprda'
+ 1>> /var/hacmp/availability/clavailability.log
:node_down_complete[107] version=%I%
:node_down_complete[111] : Pick up input
:node_down_complete[113] NODENAME=epprda
:node_down_complete[113] export NODENAME
:node_down_complete[114] PARAM=''
:node_down_complete[114] export PARAM
:node_down_complete[116] NODE_HALT_CONTROL_FILE=/usr/es/sbin/cluster/etc/ha_nodehalt.lock
:node_down_complete[125] STATUS=0
:node_down_complete[127] set -u
:node_down_complete[129] (( 1 < 1 ))
:node_down_complete[136] : serial number for this event is 22170
:node_down_complete[139] [[ '' == forced ]]
:node_down_complete[151] : if RG_DEPENDENCIES is set to false by the cluster manager,
:node_down_complete[152] : then resource groups will be processed via clsetenvgrp
:node_down_complete[154] [[ '' != forced ]]
:node_down_complete[154] [[ TRUE == FALSE ]]
:node_down_complete[184] : For each participating resource group, serially process the resources
:node_down_complete[186] LOCALCOMP=N
:node_down_complete[189] : if RG_DEPENDENCIES is set to false by the cluster manager,
:node_down_complete[190] : then resource groups will be processed via clsetenvgrp
:node_down_complete[192] [[ '' != forced ]]
:node_down_complete[192] [[ TRUE == FALSE ]]
:node_down_complete[232] [[ '' != forced ]]
:node_down_complete[232] [[ epprda == epprda ]]
:node_down_complete[235] : Call ss-unload replicated resource methods if they are defined
:node_down_complete[237] cl_rrmethods2call ss_unload
:cl_rrmethods2call[56] version=%I%
:cl_rrmethods2call[84] RRMETHODS=''
:cl_rrmethods2call[85] NEED_RR_ENV_VARS=no
:cl_rrmethods2call[104] : The load and unload methods if defined are returned on the
:cl_rrmethods2call[105] : local node
:cl_rrmethods2call[107] [[ epprda == epprda ]]
:cl_rrmethods2call[109] NEED_RR_ENV_VARS=yes
:cl_rrmethods2call[129] : Set the '*_REP_RESOURCE' variables if needed.
:cl_rrmethods2call[131] [[ yes == yes ]]
:cl_rrmethods2call[133] cllsres
:cl_rrmethods2call[133] 2> /dev/null
:cl_rrmethods2call[133] eval APPLICATIONS='"epprd_app"' EXPORT_FILESYSTEM='"/board_org"' FILESYSTEM='""' FORCED_VARYON='"false"' FSCHECK_TOOL='"fsck"' FS_BEFORE_IPADDR='"false"' MOUNT_FILESYSTEM='"/board;/board_org"' RECOVERY_METHOD='"sequential"' SERVICE_LABEL='"epprd"' SSA_DISK_FENCING='"false"' VG_AUTO_IMPORT='"false"' VOLUME_GROUP='"datavg"' USERDEFINED_RESOURCES='""'
:cl_rrmethods2call[1] APPLICATIONS=epprd_app
:cl_rrmethods2call[1] EXPORT_FILESYSTEM=/board_org
:cl_rrmethods2call[1] FILESYSTEM=''
:cl_rrmethods2call[1] FORCED_VARYON=false
:cl_rrmethods2call[1] FSCHECK_TOOL=fsck
:cl_rrmethods2call[1] FS_BEFORE_IPADDR=false
:cl_rrmethods2call[1] MOUNT_FILESYSTEM='/board;/board_org'
:cl_rrmethods2call[1] RECOVERY_METHOD=sequential
:cl_rrmethods2call[1] SERVICE_LABEL=epprd
:cl_rrmethods2call[1] SSA_DISK_FENCING=false
:cl_rrmethods2call[1] VG_AUTO_IMPORT=false
:cl_rrmethods2call[1] VOLUME_GROUP=datavg
:cl_rrmethods2call[1] USERDEFINED_RESOURCES=''
:cl_rrmethods2call[137] [[ -n '' ]]
:cl_rrmethods2call[142] [[ -n '' ]]
:cl_rrmethods2call[147] [[ -n '' ]]
:cl_rrmethods2call[152] [[ -n '' ]]
:cl_rrmethods2call[157] [[ -n '' ]]
:cl_rrmethods2call[162] [[ -n '' ]]
:cl_rrmethods2call[167] [[ -n '' ]]
:cl_rrmethods2call[172] [[ -n '' ]]
:cl_rrmethods2call[182] [[ -z '' ]]
:cl_rrmethods2call[184] typeset sysmgdata
:cl_rrmethods2call[185] typeset reposmgdata
:cl_rrmethods2call[186] [[ -x /usr/es/sbin/cluster/xd_generic/xd_cli/clxd_list_mg_smit ]]
:cl_rrmethods2call[191] [[ -n '' ]]
:cl_rrmethods2call[191] [[ -n '' ]]
:cl_rrmethods2call[197] echo ''
:cl_rrmethods2call[199] return 0
:node_down_complete[237] METHODS=''
:node_down_complete[251] : If dependencies are configured and node is being forced down then
:node_down_complete[252] : no need to do varyoff for any passive mode VGs
:node_down_complete[254] [[ TRUE == TRUE ]]
:node_down_complete[257] : If any volume groups were varied on in passive mode when this node
:node_down_complete[258] : came up, all the prior resource group processing would have left them
:node_down_complete[259] : in passive mode. Completely vary them off at this point.
:node_down_complete[261] lsvg -L
:node_down_complete[261] lsvg -L -o
:node_down_complete[261] paste -s '-d|' -
:node_down_complete[261] grep -w -v -x -E 'caavg_private|rootvg'
:node_down_complete[261] INACTIVE_VGS=datavg
:node_down_complete[264] lsvg -L datavg
:node_down_complete[264] 2> /dev/null
:node_down_complete[264] grep -i -q passive-only
:node_down_complete[267] : Reset any read only fence height prior to vary off
:node_down_complete[269] cl_set_vg_fence_height -c datavg rw
cl_set_vg_fence_height[126]: version @(#)10 1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37
cl_set_vg_fence_height[180]: open(/usr/es/sbin/cluster/etc/vg/datavg.uuid, O_RDONLY)
cl_set_vg_fence_height[214]: read(datavg, 16)
cl_set_vg_fence_height[237]: close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)
cl_set_vg_fence_height[265]: sfwSetFenceGroup(vg=datavg uuid=ec2db4422261eae02091227fb9e53c88 height=rw(0))
:node_down_complete[270] RC=0
:node_down_complete[271] (( 0 != 0 ))
:node_down_complete[282] : 'lsvg ' will show if a volume group is varied
:node_down_complete[283] : on in passive mode. Any such are varied off
:node_down_complete[285] cltime
2023-01-28T18:00:29.803254
:node_down_complete[286] varyoffvg datavg
:node_down_complete[287] RC=0
:node_down_complete[288] cltime
2023-01-28T18:00:29.931843
:node_down_complete[289] : rc_varyoffvg = 0
:node_down_complete[291] : Force a timestamp update to get timestamps in sync
:node_down_complete[292] : since timing may prevent LVM from doing so
:node_down_complete[294] cl_update_vg_odm_ts -o -f datavg
:cl_update_vg_odm_ts(0.000)[77] version=1.13
:cl_update_vg_odm_ts(0.000)[121] o_flag=''
:cl_update_vg_odm_ts(0.000)[122] f_flag=''
:cl_update_vg_odm_ts(0.000)[123] getopts :of option
:cl_update_vg_odm_ts(0.000)[126] : Local timestamps should be good, since volume group was
:cl_update_vg_odm_ts(0.001)[127] : just varied on or off
:cl_update_vg_odm_ts(0.001)[128] o_flag=TRUE
:cl_update_vg_odm_ts(0.001)[123] getopts :of option
:cl_update_vg_odm_ts(0.001)[131] : Update timestamps clusterwide, even if LVM support is in
:cl_update_vg_odm_ts(0.001)[132] : place
:cl_update_vg_odm_ts(0.001)[133] f_flag=TRUE
:cl_update_vg_odm_ts(0.001)[123] getopts :of option
:cl_update_vg_odm_ts(0.001)[142] shift 2
:cl_update_vg_odm_ts(0.001)[144] vg_name=datavg
:cl_update_vg_odm_ts(0.001)[145] [[ -z datavg ]]
:cl_update_vg_odm_ts(0.001)[151] shift
:cl_update_vg_odm_ts(0.001)[152] node_list=''
:cl_update_vg_odm_ts(0.001)[153] /usr/es/sbin/cluster/utilities/cl_get_path all
:cl_update_vg_odm_ts(0.004)[153] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin
:cl_update_vg_odm_ts(0.004)[155] [[ -z TRUE ]]
:cl_update_vg_odm_ts(0.004)[214] found_new_ts=''
:cl_update_vg_odm_ts(0.004)[217] : Try to update the volume group ODM time stamp on every other node
:cl_update_vg_odm_ts(0.004)[218] : in the resource group that owns datavg
:cl_update_vg_odm_ts(0.004)[220] [[ -z '' ]]
:cl_update_vg_odm_ts(0.004)[223] : We were not given a node list. The node list is derived from
:cl_update_vg_odm_ts(0.004)[224] : the resource group that the volume group is in.
:cl_update_vg_odm_ts(0.004)[226] /usr/es/sbin/cluster/utilities/clodmget -q 'name like *VOLUME_GROUP and value = datavg' -f group -n HACMPresource
:cl_update_vg_odm_ts(0.007)[226] group_name=epprd_rg
:cl_update_vg_odm_ts(0.007)[227] [[ -n epprd_rg ]]
:cl_update_vg_odm_ts(0.007)[230] : Find all other cluster nodes in the resource group that owns
:cl_update_vg_odm_ts(0.007)[231] : the volume group datavg
:cl_update_vg_odm_ts(0.007)[233] /usr/es/sbin/cluster/utilities/clodmget -q 'group = epprd_rg' -f nodes -n HACMPgroup
:cl_update_vg_odm_ts(0.009)[233] node_list='epprda epprds'
:cl_update_vg_odm_ts(0.009)[238] : Check to see if the volume group is known locally
:cl_update_vg_odm_ts(0.009)[240] odmget -q 'name = datavg and PdDvLn = logical_volume/vgsubclass/vgtype' CuDv
:cl_update_vg_odm_ts(0.011)[240] [[ -z $'\nCuDv:\n\tname = "datavg"\n\tstatus = 1\n\tchgstatus = 1\n\tddins = ""\n\tlocation = ""\n\tparent = ""\n\tconnwhere = ""\n\tPdDvLn = "logical_volume/vgsubclass/vgtype"' ]]
:cl_update_vg_odm_ts(0.011)[272] : Get the vgid for volume group datavg
:cl_update_vg_odm_ts(0.012)[274] getlvodm -v datavg
:cl_update_vg_odm_ts(0.014)[274] vgid=00c44af100004b00000001851e9dc053
:cl_update_vg_odm_ts(0.014)[280] : Get the volume group timestamp for datavg
:cl_update_vg_odm_ts(0.014)[281] : as currently saved in ODM
:cl_update_vg_odm_ts(0.014)[283] getlvodm -T 00c44af100004b00000001851e9dc053
:cl_update_vg_odm_ts(0.016)[283] current_odm_ts=63d4e41f29287594
:cl_update_vg_odm_ts(0.017)[288] [[ TRUE != TRUE ]]
:cl_update_vg_odm_ts(0.017)[346] : Is an update 'necessary?'
:cl_update_vg_odm_ts(0.017)[348] [[ -n 'epprda epprds' ]]
:cl_update_vg_odm_ts(0.017)[350] LOCALNODENAME=epprda
:cl_update_vg_odm_ts(0.017)[351] LOCALNODENAME=epprda
:cl_update_vg_odm_ts(0.017)[352] [[ -n epprda ]]
:cl_update_vg_odm_ts(0.017)[355] : Skip the local node, since we have done that above.
:cl_update_vg_odm_ts(0.018)[357] print 'epprda epprds'
:cl_update_vg_odm_ts(0.020)[357] tr ' ' '\n'
:cl_update_vg_odm_ts(0.021)[357] tr , '\n'
:cl_update_vg_odm_ts(0.023)[357] grep -v -w -x epprda
:cl_update_vg_odm_ts(0.024)[357] paste -s -d, -
:cl_update_vg_odm_ts(0.026)[357] node_list=epprds
:cl_update_vg_odm_ts(0.027)[365] : Update the time stamp on all those other nodes on which the
:cl_update_vg_odm_ts(0.027)[366] : volume group is currently varied off. LVM will take care of
:cl_update_vg_odm_ts(0.027)[367] : the others.
:cl_update_vg_odm_ts(0.027)[369] [[ -n epprds ]]
:cl_update_vg_odm_ts(0.027)[371] cl_on_node -cspoc '-f -n epprds' 'lsvg -o | grep -qx datavg || /usr/sbin/putlvodm -T 63d4e41f29287594 00c44af100004b00000001851e9dc053 && /usr/sbin/savebase > /dev/null'
:cl_update_vg_odm_ts(0.027)[371] _CSPOC_CALLED_FROM_SMIT=true
clhaver[576]: version 1.14
clhaver[591]: colon delimited output
clhaver[612]: MINVER=6100
clhaver[624]: thread(epprds)
clhaver[144]: cl_gethostbynode epprds
cl_gethostbynode[102]: version 1.1 i_flag=0 given name is epprds
cl_gethostbynode[127]: cl_query nodes=2
cl_gethostbynode[161]: epprds is a PowerHA node name
cl_gethostbynode[313]: epprds is the CAA host matching PowerHA node epprds
clhaver[157]: node epprds resolves to epprds
clhaver[166]: cl_socket(COLLVER epprds epprds)
clhaver[191]: cl_connect(epprds)
clhaver[230]: read(epprds)
epprds: :cl_rsh[99] version=1.4
epprds: :cl_rsh[102] CAA_node_name=''
epprds: :cl_rsh[105] : Process optional flags
epprds: :cl_rsh[107] cmd_flag=-n
epprds: :cl_rsh[108] [[ -n == -n ]]
epprds: :cl_rsh[111] : Remove the no standard input flag
epprds: :cl_rsh[113] shift
epprds: :cl_rsh[124] : Pick up and check the input
epprds: :cl_rsh[126] print 'epprds /usr/es/sbin/cluster/cspoc/cexec eval gmhdhgghcacngpcahmcaghhcgfhacacnhbhicagegbhegbhgghcahmhmcacphfhdhccphdgcgjgocphahfhegmhggpgegncacnfecadgddgedegfdedbggdcdjdcdidhdfdjdecadadagddedegbggdbdadadadadegcdadadadadadadadbdidfdbgfdjgegddadfddcacgcgcacphfhdhccphdgcgjgocphdgbhggfgcgbhdgfcadocacpgegfhgcpgohfgmgm'
epprds: :cl_rsh[126] read destination command
epprds: :cl_rsh[127] [[ -z epprds ]]
epprds: :cl_rsh[127] [[ -z '/usr/es/sbin/cluster/cspoc/cexec eval gmhdhgghcacngpcahmcaghhcgfhacacnhbhicagegbhegbhgghcahmhmcacphfhdhccphdgcgjgocphahfhegmhggpgegncacnfecadgddgedegfdedbggdcdjdcdidhdfdjdecadadagddedegbggdbdadadadadegcdadadadadadadadbdidfdbgfdjgegddadfddcacgcgcacphfhdhccphdgcgjgocphdgbhggfgcgbhdgfcadocacpgegfhgcpgohfgmgm' ]]
epprds: :cl_rsh[136] /usr/es/sbin/cluster/utilities/cl_nn2hn epprds
epprds: :cl_nn2hn[83] version=1.11
epprds: :cl_nn2hn[86] CAA_host_name=''
epprds: :cl_nn2hn[86] typeset CAA_host_name
epprds: :cl_nn2hn[87] node_name=''
epprds: :cl_nn2hn[87] typeset node_name
epprds: :cl_nn2hn[88] node_interfaces=''
epprds: :cl_nn2hn[88] typeset node_interfaces
epprds: :cl_nn2hn[89] COMM_PATH=''
epprds: :cl_nn2hn[89] typeset COMM_PATH
epprds: :cl_nn2hn[90] r_flag=''
epprds: :cl_nn2hn[90] typeset r_flag
epprds: :cl_nn2hn[93] : Pick up and check the input
epprds: :cl_nn2hn[95] getopts r option
epprds: :cl_nn2hn[106] : Pick up the destination, which follows the options
epprds: :cl_nn2hn[108] shift 0
epprds: :cl_nn2hn[109] destination=epprds
epprds: :cl_nn2hn[109] typeset destination
epprds: :cl_nn2hn[111] [[ -z epprds ]]
epprds: :cl_nn2hn[121] : In order to prevent recursion, first you must prevent recursion...
epprds: :cl_nn2hn[123] [[ '' != TRUE ]]
epprds: :cl_nn2hn[126] : This routine is not being called from cl_query_hn_id, so call it
epprds: :cl_nn2hn[127] : to see if it can find the CAA host name based on a common short
epprds: :cl_nn2hn[128] : id, or match on CAA host name, or match on CAA short name, or
epprds: :cl_nn2hn[129] : similar match in /etc/cluster/rhosts.
epprds: :cl_nn2hn[131] cl_query_hn_id -q -i epprds
epprds: cl_query_hn_id[137]: version 1.2
epprds: cl_gethostbynode[102]: version 1.1 i_flag=105 given name is epprds
epprds: cl_gethostbynode[127]: cl_query nodes=2
epprds: cl_gethostbynode[161]: epprds is a PowerHA node name
epprds: cl_gethostbynode[313]: epprds is the CAA host matching PowerHA node epprds
epprds: :cl_nn2hn[131] CAA_host_name=epprds
epprds: :cl_nn2hn[132] RC=0
epprds: :cl_nn2hn[133] (( 0 == 0 ))
epprds: :cl_nn2hn[136] : The straightforward tests worked!
epprds: :cl_nn2hn[138] [[ epprds == @(+([0-9.])|+([0-9:])) ]]
epprds: :cl_nn2hn[159] [[ -z epprds ]]
epprds: :cl_nn2hn[340] [[ -z epprds ]]
epprds: :cl_nn2hn[345] [[ -n epprds ]]
epprds: :cl_nn2hn[348] : We have found epprds is our best guess at a CAA host name
epprds: :cl_nn2hn[349] : corresponding to epprds
epprds: :cl_nn2hn[351] print epprds
epprds: :cl_nn2hn[352] return 0
epprds: :cl_rsh[136] CAA_node_name=epprds
epprds: :cl_rsh[148] : Invoke clcomd
epprds: :cl_rsh[150] /usr/sbin/clrsh epprds -n '/usr/es/sbin/cluster/cspoc/cexec eval gmhdhgghcacngpcahmcaghhcgfhacacnhbhicagegbhegbhgghcahmhmcacphfhdhccphdgcgjgocphahfhegmhggpgegncacnfecadgddgedegfdedbggdcdjdcdidhdfdjdecadadagddedegbggdbdadadadadegcdadadadadadadadbdidfdbgfdjgegddadfddcacgcgcacphfhdhccphdgcgjgocphdgbhggfgcgbhdgfcadocacpgegfhgcpgohfgmgm'
epprds: :cl_rsh[151] return 0
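cl_rsh is the clcomd-based stand-in for rsh: it maps the PowerHA node name to the matching CAA host name (via cl_nn2hn and cl_query_hn_id) and then hands the command, which travels in an encoded form, to clrsh. Reduced to its two effective steps, with the encoded payload left as a placeholder:

    CAA_node_name=$(/usr/es/sbin/cluster/utilities/cl_nn2hn epprds)
    /usr/sbin/clrsh $CAA_node_name -n '<encoded cexec command>'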
:cl_update_vg_odm_ts(0.504)[375] return 0
:node_down_complete[297] : If VG fencing is in place, restore the fence height to read-only.
:node_down_complete[299] cl_set_vg_fence_height -c datavg ro
cl_set_vg_fence_height[126]: version @(#)10 1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37
cl_set_vg_fence_height[180]: open(/usr/es/sbin/cluster/etc/vg/datavg.uuid, O_RDONLY)
cl_set_vg_fence_height[214]: read(datavg, 16)
cl_set_vg_fence_height[237]: close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)
cl_set_vg_fence_height[265]: sfwSetFenceGroup(vg=datavg uuid=ec2db4422261eae02091227fb9e53c88 height=ro(2))
:node_down_complete[300] RC=0
:node_down_complete[301] : return code from volume group fencing is 0
:node_down_complete[302] (( 0 != 0 ))
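The passive vary-off of datavg condenses to the sequence below. This paraphrases the trace, not the node_down_complete source; the passive-only test and the fence bracketing are the essential parts:

    if lsvg -L datavg 2>/dev/null | grep -iq passive-only
    then
        cl_set_vg_fence_height -c datavg rw   # varyoff needs read/write access
        varyoffvg datavg
        cl_update_vg_odm_ts -o -f datavg      # sync VG ODM timestamps clusterwide
        cl_set_vg_fence_height -c datavg ro   # node down: back to read-only
    fi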
:node_down_complete[315] : remove the flag file used to indicate reconfig_resources
:node_down_complete[317] rm -f /usr/es/sbin/cluster/etc/.hacmp_wlm_config_changed
:node_down_complete[320] : Run WLM stop script
:node_down_complete[322] cl_wlm_stop
:cl_wlm_stop[+55] version=%I%
:cl_wlm_stop[+59] :cl_wlm_stop[+59] clwlmruntime -l
:cl_wlm_stop[+59] awk BEGIN { FS = ":" } $1 !~ /^#.*/ { print $1 }
HA_WLM_CONFIG=HA_WLM_config
:cl_wlm_stop[+60] [[ -z HA_WLM_config ]]
:cl_wlm_stop[+69] wlmcntrl -q
WLM is stopped
:cl_wlm_stop[+70] WLM_IS_RUNNING=1
:cl_wlm_stop[+72] WLM_CONFIG_FILES=classes shares limits rules
:cl_wlm_stop[+74] PREV_WLM_CONFIG=
:cl_wlm_stop[+76] HA_STARTED_WLM=false
:cl_wlm_stop[+78] [[ -e /etc/wlm/HA_WLM_config/HA_prev_config_subdir ]]
:cl_wlm_stop[+86] [[ -e /etc/wlm/HA_WLM_config/classes.prev ]]
:cl_wlm_stop[+86] [[ -e /etc/wlm/HA_WLM_config/shares.prev ]]
:cl_wlm_stop[+86] [[ -e /etc/wlm/HA_WLM_config/limits.prev ]]
:cl_wlm_stop[+86] [[ -e /etc/wlm/HA_WLM_config/rules.prev ]]
:cl_wlm_stop[+107] [[ -n ]]
:cl_wlm_stop[+107] [[ true = false ]]
:cl_wlm_stop[+144] exit 0
:node_down_complete[330] [[ epprda == epprda ]]
:node_down_complete[333] : Node is down: Create the lock file that inhibits node halt
:node_down_complete[335] /bin/touch /usr/es/sbin/cluster/etc/ha_nodehalt.lock
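With the lock file in place, a later clstrmgrES termination is treated as benign. Judging by the lock file's name and the clexit.rc message further down this log, the SRC notify method applies a guard along these lines (a sketch under that assumption, not the clexit.rc source):

    # halt only when clstrmgrES dies without the node having left cleanly
    [[ -e /usr/es/sbin/cluster/etc/ha_nodehalt.lock ]] && exit 0   # clean stop
    halt -q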
:node_down_complete[339] : If this is the last node to leave, restore read/write access to all volume groups
:node_down_complete[341] [[ '' != forced ]]
:node_down_complete[343] [[ -z epprds ]]
:node_down_complete[392] [[ epprda == epprda ]]
:node_down_complete[395] : Node is gracefully going down.
:node_down_complete[397] clodmget -n -q policy=scsi -f value HACMPsplitmerge
:node_down_complete[397] SCSIPR_ENABLED=''
:node_down_complete[397] typeset SCSIPR_ENABLED
:node_down_complete[398] [[ '' == Yes ]]
:node_down_complete[452] : refresh clcomd, FWIW
:node_down_complete[454] refresh -s clcomd
0513-095 The request for subsystem refresh was completed successfully.
:node_down_complete[459] : This is the final info of all RGs:
:node_down_complete[461] clRGinfo -p -t
:node_down_complete[461] 2>& 1
clRGinfo[431]: version I
clRGinfo[517]: Number of resource groups = 0
clRGinfo[562]: cluster epprda_cluster is version = 22
clRGinfo[597]: no resource groups specified on command line - print all
clRGinfo[685]: Current group is 'epprd_rg'
get primary state info for state 4
get secondary state info for state 4
getPrimaryStateStr: using primary_table => primary_state_table
get primary state info for state 4
get secondary state info for state 4
getPrimaryStateStr: using primary_table => primary_state_table
Cluster Name: epprda_cluster
Resource Group Name: epprd_rg
Node                                                             Group State     Delayed Timers
---------------------------------------------------------------- --------------- -------------------
epprda                                                           OFFLINE
epprds                                                           OFFLINE
:node_down_complete[463] return 0
Jan 28 2023 18:00:30 EVENT COMPLETED: node_down_complete epprda 0
|2023-01-28T18:00:30|22170|EVENT COMPLETED: node_down_complete epprda 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T18:00:30.579460
+ echo '|2023-01-28T18:00:30.579460|INFO: node_down_complete|epprda|0'
+ 1>> /var/hacmp/availability/clavailability.log
clexit.rc : Normal termination of clstrmgrES. Restart now.
0513-059 The clstrmgrES Subsystem has been started. Subsystem PID is 26607896.
Jan 28 2023 18:03:26 EVENT START: admin_op clrm_start_request 8559 0
|2023-01-28T18:03:26|8559|EVENT START: admin_op clrm_start_request 8559 0|
:admin_op[110] trap sigint_handler INT
:admin_op[116] OP_TYPE=clrm_start_request
:admin_op[116] typeset OP_TYPE
:admin_op[117] SERIAL=8559
:admin_op[117] typeset -li SERIAL
:admin_op[118] INVALID=0
:admin_op[118] typeset -li INVALID
The administrator initiated the following action at Sat Jan 28 18:03:26 KORST 2023
Check smit.log and clutils.log for additional details.
Starting PowerHA cluster services on node: epprda in normal mode...
Jan 28 2023 18:03:29 EVENT COMPLETED: admin_op clrm_start_request 8559 0 0
|2023-01-28T18:03:29|8559|EVENT COMPLETED: admin_op clrm_start_request 8559 0 0|
PowerHA SystemMirror Event Preamble
----------------------------------------------------------------------------
Serial number for this event: 8560
Cluster services started on node 'epprda'
Enqueued rg_move acquire event for resource group epprd_rg.
Node Up Completion Event has been enqueued.
----------------------------------------------------------------------------
|EVENT_PREAMBLE_START|TE_JOIN_NODE_DEP|2023-01-28T18:03:31|8560|
|CLUSTER_RG_MOVE_ACQUIRE|epprd_rg|
|NODE_UP_COMPLETE|
|EVENT_PREAMBLE_END|
Jan 28 2023 18:03:34 EVENT START: node_up epprda
|2023-01-28T18:03:34|8560|EVENT START: node_up epprda|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T18:03:34.209427
+ echo '|2023-01-28T18:03:34.209427|INFO: node_up|epprda'
+ 1>> /var/hacmp/availability/clavailability.log
:node_up[182] version=%I%
:node_up[185] NODENAME=epprda
:node_up[185] export NODENAME
:node_up[193] STATUS=0
:node_up[193] typeset -li STATUS
:node_up[194] RC=0
:node_up[194] typeset -li RC
:node_up[195] ENABLE_NFS_CROSS_MOUNT=false
:node_up[196] START_MODE=''
:node_up[196] typeset START_MODE
:node_up[198] set -u
:node_up[200] (( 1 < 1 ))
:node_up[200] (( 1 > 2 ))
:node_up[207] : serial number for this event is 8560
:node_up[210] [[ epprda == epprda ]]
:node_up[213] : Remove the node halt lock file.
:node_up[214] : Hereafter, clstrmgr failure leads to node halt
:node_up[216] rm -f /usr/es/sbin/cluster/etc/ha_nodehalt.lock
:node_up[219] (( 1 > 1 ))
:node_up[256] : If RG_DEPENDENCIES=false, process RGs with clsetenvgrp
:node_up[258] [[ TRUE == FALSE ]]
:node_up[281] : localnode processing prior to RG acquisition
:node_up[283] [[ epprda == epprda ]]
:node_up[283] [[ '' != forced ]]
:node_up[286] : Reserve Volume Groups using SCSIPR
:node_up[288] clodmget -n -q policy=scsi -f value HACMPsplitmerge
:node_up[288] SCSIPR_ENABLED=''
:node_up[288] typeset SCSIPR_ENABLED
:node_up[289] [[ '' == Yes ]]
:node_up[334] : Setup VG fencing. This must be done prior to any potential disk access.
:node_up[336] node_up_vg_fence_init
:node_up[node_up_vg_fence_init:73] typeset VGs_on_line
:node_up[node_up_vg_fence_init:74] typeset VG_name
:node_up[node_up_vg_fence_init:75] typeset VG_ID
:node_up[node_up_vg_fence_init:76] typeset VG_PV_list
:node_up[node_up_vg_fence_init:79] : Find out what volume groups are currently on-line
:node_up[node_up_vg_fence_init:81] lsvg -L -o
:node_up[node_up_vg_fence_init:81] 2> /var/hacmp/log/node_up.lsvg.err
:node_up[node_up_vg_fence_init:81] print caavg_private rootvg
:node_up[node_up_vg_fence_init:81] VGs_on_line='caavg_private rootvg'
:node_up[node_up_vg_fence_init:82] [[ -e /var/hacmp/log/node_up.lsvg.err ]]
:node_up[node_up_vg_fence_init:82] [[ ! -s /var/hacmp/log/node_up.lsvg.err ]]
:node_up[node_up_vg_fence_init:82] rm /var/hacmp/log/node_up.lsvg.err
:node_up[node_up_vg_fence_init:85] : Clean up any old fence group files and stale fence groups.
:node_up[node_up_vg_fence_init:86] : These are all of the form '/usr/es/sbin/cluster/etc/vg/<vg name>.uuid'
:node_up[node_up_vg_fence_init:88] valid_vg_lst=''
:node_up[node_up_vg_fence_init:89] lsvg -L
:node_up[node_up_vg_fence_init:89] egrep -vw 'rootvg|caavg_private'
:node_up[node_up_vg_fence_init:89] 2>> /var/hacmp/log/node_up.lsvg.err
:node_up:datavg[node_up_vg_fence_init:91] PS4_LOOP=datavg
:node_up:datavg[node_up_vg_fence_init:92] clodmget -q $'name like \'*VOLUME_GROUP\' and value = datavg' -f value -n HACMPresource
:node_up:datavg[node_up_vg_fence_init:92] [[ -z datavg ]]
:node_up:datavg[node_up_vg_fence_init:109] : Volume group datavg is an HACMP resource
:node_up:datavg[node_up_vg_fence_init:111] [[ 'caavg_private rootvg' == ?(*\ )datavg?(\ *) ]]
:node_up:datavg[node_up_vg_fence_init:115] fence_height=ro
:node_up:datavg[node_up_vg_fence_init:119] : Recreate the fence group to match current volume group membership
:node_up:datavg[node_up_vg_fence_init:121] cl_vg_fence_redo -c datavg ro
:cl_vg_fence_redo[52] version=1.3
:cl_vg_fence_redo[55] RC=0
:cl_vg_fence_redo[55] typeset -li RC
:cl_vg_fence_redo[58] : Check for optional -c parameter
:cl_vg_fence_redo[60] [[ -c == -c ]]
:cl_vg_fence_redo[62] c_flag=-c
:cl_vg_fence_redo[63] shift
:cl_vg_fence_redo[66] VG=datavg
:cl_vg_fence_redo[67] UUID_file=/usr/es/sbin/cluster/etc/vg/datavg.uuid
:cl_vg_fence_redo[68] fence_height=ro
:cl_vg_fence_redo[70] [[ -s /usr/es/sbin/cluster/etc/vg/datavg.uuid ]]
:cl_vg_fence_redo[83] [[ -z ro ]]
:cl_vg_fence_redo[98] : Rebuild the fence group for datavg
:cl_vg_fence_redo[99] : First, find the disks in the volume group
:cl_vg_fence_redo[101] /usr/sbin/getlvodm -v datavg
:cl_vg_fence_redo[101] VGID=00c44af100004b00000001851e9dc053
:cl_vg_fence_redo[103] [[ -n 00c44af100004b00000001851e9dc053 ]]
:cl_vg_fence_redo[106] : Create a fence group for datavg
:cl_vg_fence_redo[108] /usr/sbin/getlvodm -w 00c44af100004b00000001851e9dc053
:cl_vg_fence_redo[108] cut -f2 '-d '
:cl_vg_fence_redo[108] PV_disk_list=$'hdisk2\nhdisk3\nhdisk4\nhdisk5\nhdisk6\nhdisk7\nhdisk8'
:cl_vg_fence_redo[109] cl_vg_fence_init -c datavg ro hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8
cl_vg_fence_init[145]: version @(#) 7d4c34b 43haes/usr/sbin/cluster/events/utils/cl_vg_fence_init.c, 726, 2147A_aha726, Feb 05 2021 09:50 PM
cl_vg_fence_init[204]: odm_initialize()
cl_vg_fence_init[231]: calloc(7, 64)
cl_vg_fence_init[259]: getattr(hdisk2, PCM) = PCM/friend/fcpother
cl_vg_fence_init[259]: getattr(hdisk3, PCM) = PCM/friend/fcpother
cl_vg_fence_init[259]: getattr(hdisk4, PCM) = PCM/friend/fcpother
cl_vg_fence_init[259]: getattr(hdisk5, PCM) = PCM/friend/fcpother
cl_vg_fence_init[259]: getattr(hdisk6, PCM) = PCM/friend/fcpother
cl_vg_fence_init[259]: getattr(hdisk7, PCM) = PCM/friend/fcpother
cl_vg_fence_init[259]: getattr(hdisk8, PCM) = PCM/friend/fcpother
cl_vg_fence_init[294]: sfwAddFenceGroup(datavg, 7, hdisk2, hdisk3, hdisk4, hdisk5, hdisk6, hdisk7, hdisk8)
cl_vg_fence_init[374]: free(200101b8)
cl_vg_fence_init[400]: creat(/usr/es/sbin/cluster/etc/vg/datavg.uuid)
cl_vg_fence_init[408]: write(/usr/es/sbin/cluster/etc/vg/datavg.uuid, 16)
cl_vg_fence_init[442]: sfwSetFenceGroup(vg=datavg, height=ro(2) uuid=ec2db4422261eae02091227fb9e53c88)
:cl_vg_fence_redo[110] RC=0
:cl_vg_fence_redo[111] : Exit status is 0 from cl_vg_fence_init datavg ro hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8
:cl_vg_fence_redo[113] (( 0 != 0 ))
:cl_vg_fence_redo[123] return 0
:node_up:datavg[node_up_vg_fence_init:122] valid_vg_lst=' datavg'
:node_up:datavg[node_up_vg_fence_init:125] [[ -e /var/hacmp/log/node_up.lsvg.err ]]
:node_up:datavg[node_up_vg_fence_init:125] [[ ! -s /var/hacmp/log/node_up.lsvg.err ]]
:node_up:datavg[node_up_vg_fence_init:125] rm /var/hacmp/log/node_up.lsvg.err
:node_up:datavg[node_up_vg_fence_init:128] : Any remaining old fence group files are from stale fence groups,
:node_up:datavg[node_up_vg_fence_init:129] : so remove them
:node_up:datavg[node_up_vg_fence_init:131] [[ -s /usr/es/sbin/cluster/etc/vg/datavg.uuid ]]
:node_up:datavg[node_up_vg_fence_init:133] ls /usr/es/sbin/cluster/etc/vg/datavg.uuid
:node_up:/usr/es/sbin/cluster/etc/vg/datavg.uuid[node_up_vg_fence_init:135] PS4_LOOP=/usr/es/sbin/cluster/etc/vg/datavg.uuid
:node_up:/usr/es/sbin/cluster/etc/vg/datavg.uuid[node_up_vg_fence_init:136] VG_name=datavg.uuid
:node_up:/usr/es/sbin/cluster/etc/vg/datavg.uuid[node_up_vg_fence_init:137] VG_name=datavg
:node_up:/usr/es/sbin/cluster/etc/vg/datavg.uuid[node_up_vg_fence_init:138] [[ ' datavg' == ?(*\ )datavg?(\ *) ]]
:node_up:/usr/es/sbin/cluster/etc/vg/datavg.uuid[node_up_vg_fence_init:141] : Just redid the fence group for datavg
:node_up:/usr/es/sbin/cluster/etc/vg/datavg.uuid[node_up_vg_fence_init:143] continue
:node_up:/usr/es/sbin/cluster/etc/vg/datavg.uuid[node_up_vg_fence_init:158] unset PS4_LOOP
:node_up[node_up_vg_fence_init:160] return 0
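cl_vg_fence_redo rebuilds a fence group from first principles: resolve the VG name to its VGID, enumerate the member hdisks, and hand the list to cl_vg_fence_init at the requested fence height. Condensed from the trace:

    VGID=$(/usr/sbin/getlvodm -v datavg)                    # VG name -> VGID
    PV_list=$(/usr/sbin/getlvodm -w $VGID | cut -f2 -d' ')  # VGID -> hdisk list
    cl_vg_fence_init -c datavg ro $PV_list                  # recreate group, read-only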
:node_up[344] : If WLM manager classes have been configured for an application server, process them now
:node_up[346] clodmget -q $'name like \'WLM_*\'' -f id HACMPresource
:node_up[346] [[ -n '' ]]
:node_up[371] : Call ss-load replicated resource methods if they are defined
:node_up[373] cl_rrmethods2call ss_load
:cl_rrmethods2call[56] version=%I%
:cl_rrmethods2call[84] RRMETHODS=''
:cl_rrmethods2call[85] NEED_RR_ENV_VARS=no
:cl_rrmethods2call[104] : The load and unload methods if defined are returned on the
:cl_rrmethods2call[105] : local node
:cl_rrmethods2call[107] [[ epprda == epprda ]]
:cl_rrmethods2call[109] NEED_RR_ENV_VARS=yes
:cl_rrmethods2call[129] : Set the '*_REP_RESOURCE' variables if needed.
:cl_rrmethods2call[131] [[ yes == yes ]]
:cl_rrmethods2call[133] cllsres
:cl_rrmethods2call[133] 2> /dev/null
:cl_rrmethods2call[133] eval APPLICATIONS='"epprd_app"' EXPORT_FILESYSTEM='"/board_org' '/sapmnt/EPP"' FILESYSTEM='""' FORCED_VARYON='"false"' FSCHECK_TOOL='"fsck"' FS_BEFORE_IPADDR='"false"' MOUNT_FILESYSTEM='"/board;/board_org"' RECOVERY_METHOD='"sequential"' SERVICE_LABEL='"epprd"' SSA_DISK_FENCING='"false"' VG_AUTO_IMPORT='"false"' VOLUME_GROUP='"datavg"' USERDEFINED_RESOURCES='""'
:cl_rrmethods2call[1] APPLICATIONS=epprd_app
:cl_rrmethods2call[1] EXPORT_FILESYSTEM='/board_org /sapmnt/EPP'
:cl_rrmethods2call[1] FILESYSTEM=''
:cl_rrmethods2call[1] FORCED_VARYON=false
:cl_rrmethods2call[1] FSCHECK_TOOL=fsck
:cl_rrmethods2call[1] FS_BEFORE_IPADDR=false
:cl_rrmethods2call[1] MOUNT_FILESYSTEM='/board;/board_org'
:cl_rrmethods2call[1] RECOVERY_METHOD=sequential
:cl_rrmethods2call[1] SERVICE_LABEL=epprd
:cl_rrmethods2call[1] SSA_DISK_FENCING=false
:cl_rrmethods2call[1] VG_AUTO_IMPORT=false
:cl_rrmethods2call[1] VOLUME_GROUP=datavg
:cl_rrmethods2call[1] USERDEFINED_RESOURCES=''
:cl_rrmethods2call[137] [[ -n '' ]]
:cl_rrmethods2call[142] [[ -n '' ]]
:cl_rrmethods2call[147] [[ -n '' ]]
:cl_rrmethods2call[152] [[ -n '' ]]
:cl_rrmethods2call[157] [[ -n '' ]]
:cl_rrmethods2call[162] [[ -n '' ]]
:cl_rrmethods2call[167] [[ -n '' ]]
:cl_rrmethods2call[172] [[ -n '' ]]
:cl_rrmethods2call[182] [[ -z '' ]]
:cl_rrmethods2call[184] typeset sysmgdata
:cl_rrmethods2call[185] typeset reposmgdata
:cl_rrmethods2call[186] [[ -x /usr/es/sbin/cluster/xd_generic/xd_cli/clxd_list_mg_smit ]]
:cl_rrmethods2call[191] [[ -n '' ]]
:cl_rrmethods2call[191] [[ -n '' ]]
:cl_rrmethods2call[197] echo ''
:cl_rrmethods2call[199] return 0
:node_up[373] METHODS=''
:node_up[387] : When the local node is brought up, reset the resource locator info.
:node_up[390] clchdaemons -r -d clstrmgr_scripts -t resource_locator
:node_up[397] [[ '' != manual ]]
:node_up[400] : attempt passive varyon for any ECM VGs in serial RGs
:node_up[405] cl_pvo
:cl_pvo[590] version=1.34.2.12
:cl_pvo(0.007)[592] PS4_TIMER=true
:cl_pvo(0.007)[594] rc=0
:cl_pvo(0.007)[594] typeset -li rc
:cl_pvo(0.007)[595] mode=0
:cl_pvo(0.007)[595] typeset -li mode
:cl_pvo(0.007)[600] ENODEV=19
:cl_pvo(0.008)[600] typeset -li ENODEV
:cl_pvo(0.008)[601] vg_force_on_flag=''
:cl_pvo(0.008)[605] : Pick up any passed options
:cl_pvo(0.008)[607] rg_list=''
:cl_pvo(0.008)[607] export rg_list
:cl_pvo(0.008)[608] vg_list=''
:cl_pvo(0.008)[609] fs_list=''
:cl_pvo(0.008)[610] all_vgs_flag=''
:cl_pvo(0.008)[611] [[ -z '' ]]
:cl_pvo(0.008)[613] all_vgs_flag=true
:cl_pvo(0.008)[615] getopts :g:v:f: option
:cl_pvo(0.008)[629] shift 0
:cl_pvo(0.008)[630] [[ -n '' ]]
:cl_pvo(0.008)[645] O_flag=''
:cl_pvo(0.008)[646] odmget -q 'attribute = varyon_state' PdAt
:cl_pvo(0.010)[646] [[ -n $'\nPdAt:\n\tuniquetype = "logical_volume/vgsubclass/vgtype"\n\tattribute = "varyon_state"\n\tdeflt = "0"\n\tvalues = "0,1,2,3"\n\twidth = ""\n\ttype = "R"\n\tgeneric = ""\n\trep = "l"\n\tnls_index = 0' ]]
:cl_pvo(0.010)[649] : LVM may record that a volume group was varied on from an earlier
:cl_pvo(0.010)[650] : IPL. Rely on HA state tracking, and override the LVM check
:cl_pvo(0.010)[652] O_flag=-O
:cl_pvo(0.010)[655] [[ -n true ]]
:cl_pvo(0.010)[657] [[ -z epprda ]]
:cl_pvo(0.010)[661] [[ -z epprda ]]
:cl_pvo(0.010)[672] : Since no resource names of any type were explicitly passed, go
:cl_pvo(0.010)[673] : find all the resource groups this node is a member of.
:cl_pvo(0.012)[675] clodmget -f group,nodes HACMPgroup
:cl_pvo(0.015)[675] egrep '[: ]epprda( |$)'
:cl_pvo(0.016)[675] cut -f1 -d:
:cl_pvo(0.019)[675] rg_list=epprd_rg
:cl_pvo(0.019)[676] [[ -z epprd_rg ]]
:cl_pvo(0.019)[686] [[ -z '' ]]
:cl_pvo(0.019)[686] [[ -n epprd_rg ]]
:cl_pvo(0.019)[689] : Since no volume groups were passed, go find all the volume groups
:cl_pvo(0.019)[690] : in the given/extracted list of resource groups.
:cl_pvo(0.019)[695] : For each resource group that this node participates in, get the
:cl_pvo(0.019)[696] : list of serial access volume groups in that resource group.
:cl_pvo(0.019)[698] clodmget -q 'group = epprd_rg and name = VOLUME_GROUP' -f value -n HACMPresource
:cl_pvo(0.022)[698] rg_vg_list=datavg
:cl_pvo(0.022)[700] [[ -n datavg ]]
:cl_pvo(0.022)[702] [[ -n true ]]
:cl_pvo(0.022)[703] odmget -q $'group = epprd_rg and name like \'*REP_RESOURCE\'' HACMPresource
:cl_pvo(0.024)[703] [[ -n '' ]]
:cl_pvo(0.024)[739] : If there were any serial access volume groups for this node and
:cl_pvo(0.024)[740] : that resource group, add them to the list.
:cl_pvo(0.024)[742] vg_list=datavg
:cl_pvo(0.024)[747] [[ -z '' ]]
:cl_pvo(0.024)[747] [[ -n epprd_rg ]]
:cl_pvo(0.024)[750] : Since no file systems were passed, go find all the file systems in
:cl_pvo(0.024)[751] : the given/extracted list of resource groups.
:cl_pvo(0.024)[755] : For each resource group that this node participates in, get the
:cl_pvo(0.024)[756] : list of file systems in that resource group.
:cl_pvo(0.024)[761] clodmget -q 'group = epprd_rg and name = FILESYSTEM' -f value -n HACMPresource
:cl_pvo(0.027)[761] rg_fs_list=ALL
:cl_pvo(0.027)[763] [[ -n ALL ]]
:cl_pvo(0.027)[765] [[ -n true ]]
:cl_pvo(0.027)[766] odmget -q $'group = epprd_rg and name like \'*REP_RESOURCE\'' HACMPresource
:cl_pvo(0.029)[766] [[ -n '' ]]
:cl_pvo(0.029)[780] : If there were any file systems for this node and that resource
:cl_pvo(0.029)[781] : group, add them to the list
:cl_pvo(0.029)[783] fs_list=ALL
:cl_pvo(0.029)[790] [[ ALL == ALL ]]
:cl_pvo(0.029)[792] continue
:cl_pvo(0.029)[801] : Remove any duplicates from the volume group list
:cl_pvo(0.031)[803] echo datavg
:cl_pvo(0.033)[803] tr ' ' '\n'
:cl_pvo(0.034)[803] sort -u
:cl_pvo(0.038)[803] vg_list=datavg
:cl_pvo(0.038)[805] [[ -z datavg ]]
:cl_pvo(0.038)[814] : Find out what volume groups are currently on-line
:cl_pvo(0.039)[816] lsvg -L -o
:cl_pvo(0.039)[816] 2> /tmp/lsvg.err
:cl_pvo(0.042)[816] print caavg_private rootvg
:cl_pvo(0.042)[816] ON_LIST='caavg_private rootvg'
:cl_pvo(0.042)[819] : If this node is the first node up in the cluster,
:cl_pvo(0.042)[820] : we want to do a sync for each of the volume groups
:cl_pvo(0.042)[821] : we bring on-line. If multiple cluster nodes are already active, the
:cl_pvo(0.042)[822] : sync is unnecessary, having been done once, and possibly disruptive.
:cl_pvo(0.042)[824] [[ -n '' ]]
:cl_pvo(0.042)[833] : No other cluster nodes are present; default to sync just to be sure
:cl_pvo(0.042)[834] : the volume group is in a good state
:cl_pvo(0.042)[836] syncflag=''
:cl_pvo(0.042)[840] : Now, process each volume group in the list of those this node accesses.
:cl_pvo(0.042):datavg[844] PS4_LOOP=datavg
:cl_pvo(0.042):datavg[844] typeset PS4_LOOP
:cl_pvo(0.042):datavg[846] : Skip any concurrent GMVGs, they should never be pvo.
:cl_pvo(0.042):datavg[848] odmget -q name='GMVG_REP_RESOURCE AND value=datavg' HACMPresource
:cl_pvo(0.045):datavg[848] [[ -n '' ]]
:cl_pvo(0.045):datavg[853] : The VGID is what the LVM low level commands used below use to
:cl_pvo(0.045):datavg[854] : identify the volume group.
:cl_pvo(0.045):datavg[856] /usr/sbin/getlvodm -v datavg
:cl_pvo(0.047):datavg[856] vgid=00c44af100004b00000001851e9dc053
:cl_pvo(0.047):datavg[860] mode=99
:cl_pvo(0.047):datavg[863] : Attempt to determine the mode of the volume group - is it an
:cl_pvo(0.047):datavg[864] : enhanced concurrent mode volume group or not.
:cl_pvo(0.047):datavg[868] export mode
:cl_pvo(0.047):datavg[869] hdisklist=''
:cl_pvo(0.048):datavg[870] /usr/sbin/getlvodm -w 00c44af100004b00000001851e9dc053
:cl_pvo(0.050):datavg[870] read pvid hdisk
:cl_pvo(0.050):datavg[871] hdisklist=hdisk2
:cl_pvo(0.050):datavg[870] read pvid hdisk
:cl_pvo(0.051):datavg[871] hdisklist='hdisk2 hdisk3'
:cl_pvo(0.051):datavg[870] read pvid hdisk
:cl_pvo(0.051):datavg[871] hdisklist='hdisk2 hdisk3 hdisk4'
:cl_pvo(0.051):datavg[870] read pvid hdisk
:cl_pvo(0.051):datavg[871] hdisklist='hdisk2 hdisk3 hdisk4 hdisk5'
:cl_pvo(0.051):datavg[870] read pvid hdisk
:cl_pvo(0.051):datavg[871] hdisklist='hdisk2 hdisk3 hdisk4 hdisk5 hdisk6'
:cl_pvo(0.051):datavg[870] read pvid hdisk
:cl_pvo(0.051):datavg[871] hdisklist='hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7'
:cl_pvo(0.051):datavg[870] read pvid hdisk
:cl_pvo(0.051):datavg[871] hdisklist='hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8'
:cl_pvo(0.051):datavg[870] read pvid hdisk
:cl_pvo(0.051):datavg[873] get_vg_mode 'hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8' 00c44af100004b00000001851e9dc053 datavg
:cl_pvo(0.051):datavg[get_vg_mode:289] typeset vgid vg_name syncflag hdisklist
:cl_pvo(0.051):datavg[get_vg_mode:290] typeset GROUP_NAME FORCED_VARYON
:cl_pvo(0.051):datavg[get_vg_mode:291] TUR_RC=0
:cl_pvo(0.051):datavg[get_vg_mode:291] typeset -li TUR_RC
:cl_pvo(0.051):datavg[get_vg_mode:292] vg_disks=0
:cl_pvo(0.051):datavg[get_vg_mode:292] typeset -li vg_disks
:cl_pvo(0.051):datavg[get_vg_mode:293] max_disk_test=0
:cl_pvo(0.051):datavg[get_vg_mode:293] typeset -li max_disk_test
:cl_pvo(0.051):datavg[get_vg_mode:294] disk_tested=0
:cl_pvo(0.051):datavg[get_vg_mode:294] typeset -li disk_tested
:cl_pvo(0.051):datavg[get_vg_mode:296] hdisklist='hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8'
:cl_pvo(0.051):datavg[get_vg_mode:297] vgid=00c44af100004b00000001851e9dc053
:cl_pvo(0.051):datavg[get_vg_mode:298] vg_name=datavg
:cl_pvo(0.051):datavg[get_vg_mode:299] syncflag=''
:cl_pvo(0.051):datavg[get_vg_mode:301] odmget -q name='datavg and attribute=conc_capable and value=y' CuAt
:cl_pvo(0.052):datavg[get_vg_mode:301] ODMDIR=/etc/objrepos
:cl_pvo(0.054):datavg[get_vg_mode:301] [[ -n $'\nCuAt:\n\tname = "datavg"\n\tattribute = "conc_capable"\n\tvalue = "y"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "l"\n\tnls_index = 0' ]]
:cl_pvo(0.054):datavg[get_vg_mode:304] : If LVM thinks that this volume group is concurrent capable, that
:cl_pvo(0.054):datavg[get_vg_mode:305] : is good enough
:cl_pvo(0.054):datavg[get_vg_mode:307] mode=32
:cl_pvo(0.054):datavg[get_vg_mode:308] return
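get_vg_mode settles the question with a single ODM lookup: if CuAt records conc_capable=y for the VG, LVM's word is taken as final and mode becomes 32 (enhanced concurrent); otherwise mode stays at its 99 "undetermined" default and further probing would follow. A standalone sketch of that check, using the mode convention visible in this trace:

    vg=datavg
    if [[ -n $(ODMDIR=/etc/objrepos odmget -q "name=$vg and attribute=conc_capable and value=y" CuAt) ]]
    then
        mode=32        # enhanced concurrent capable, per LVM
    else
        mode=99        # not decided by this check alone
    fi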
:cl_pvo(0.054):datavg[876] : See if the volume group is already on line. This should
:cl_pvo(0.054):datavg[877] : only happen if it were manually brought on line outside of HACMP
:cl_pvo(0.054):datavg[878] : control, or left on-line after a forced down.
:cl_pvo(0.054):datavg[880] vg_on_mode=''
:cl_pvo(0.054):datavg[880] typeset vg_on_mode
:cl_pvo(0.054):datavg[881] [[ 'caavg_private rootvg' == ?(*\ )datavg?(\ *) ]]
:cl_pvo(0.055):datavg[891] lsvg -L datavg
:cl_pvo(0.055):datavg[891] 2> /dev/null
:cl_pvo(0.057):datavg[891] grep -q -i -w passive-only
:cl_pvo(0.059):datavg[896] [[ -n '' ]]
:cl_pvo(0.059):datavg[976] : Volume group is currently not on line in any mode
:cl_pvo(0.059):datavg[978] (( 99 == 32 ))
:cl_pvo(0.060):datavg[1041] (( 32 != 32 && 99 != 32 ))
:cl_pvo(0.060):datavg[1060] (( 32 == 32 ))
:cl_pvo(0.060):datavg[1063] : If this is actually an enhanced concurrent mode volume group,
:cl_pvo(0.060):datavg[1064] : bring it on line in passive mode. Other kinds are just skipped.
:cl_pvo(0.060):datavg[1066] varyonp datavg 'hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8'
:cl_pvo(0.060):datavg[varyonp:417] NOQUORUM=20
:cl_pvo(0.060):datavg[varyonp:417] typeset -li NOQUORUM
:cl_pvo(0.060):datavg[varyonp:418] rc=0
:cl_pvo(0.060):datavg[varyonp:418] typeset -li rc
:cl_pvo(0.060):datavg[varyonp:421] : Pick up passed parameters: volume group and sync flag
:cl_pvo(0.060):datavg[varyonp:423] typeset syncflag hdisklist vg
:cl_pvo(0.060):datavg[varyonp:424] vg=datavg
:cl_pvo(0.060):datavg[varyonp:425] hdisklist='hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8'
:cl_pvo(0.060):datavg[varyonp:426] syncflag=''
:cl_pvo(0.060):datavg[varyonp:429] : Make sure the volume group is not fenced. Varyon requires read write
:cl_pvo(0.060):datavg[varyonp:430] : access.
:cl_pvo(0.060):datavg[varyonp:432] cl_set_vg_fence_height -c datavg rw
cl_set_vg_fence_height[126]: version @(#)10 1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37
cl_set_vg_fence_height[180]: open(/usr/es/sbin/cluster/etc/vg/datavg.uuid, O_RDONLY)
cl_set_vg_fence_height[214]: read(datavg, 16)
cl_set_vg_fence_height[237]: close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)
cl_set_vg_fence_height[265]: sfwSetFenceGroup(vg=datavg uuid=ec2db4422261eae02091227fb9e53c88 height=rw(0))
:cl_pvo(0.063):datavg[varyonp:433] RC=0
:cl_pvo(0.063):datavg[varyonp:434] (( 19 == 0 ))
:cl_pvo(0.063):datavg[varyonp:442] : Return code from volume group fencing for datavg is 0
:cl_pvo(0.063):datavg[varyonp:443] (( 0 != 0 ))
:cl_pvo(0.063):datavg[varyonp:455] : Try to vary on the volume group in passive concurrent mode
:cl_pvo(0.063):datavg[varyonp:457] varyonvg -c -P -O datavg
:cl_pvo(0.528):datavg[varyonp:458] rc=0
:cl_pvo(0.528):datavg[varyonp:460] (( 0 != 0 ))
:cl_pvo(0.528):datavg[varyonp:483] : exit status of varyonvg -c -P -O datavg is: 0
:cl_pvo(0.528):datavg[varyonp:485] (( 0 == 20 ))
:cl_pvo(0.528):datavg[varyonp:505] : If varyon was ultimately unsuccessful, note the error
:cl_pvo(0.528):datavg[varyonp:507] (( 0 != 0 ))
:cl_pvo(0.528):datavg[varyonp:511] : If varyonvg was successful, try to recover
:cl_pvo(0.528):datavg[varyonp:512] : any missing or removed disks
:cl_pvo(0.528):datavg[varyonp:514] mr_recovery datavg
:cl_pvo(0.528):datavg[mr_recovery:59] vg=datavg
:cl_pvo(0.528):datavg[mr_recovery:59] typeset vg
:cl_pvo(0.528):datavg[mr_recovery:60] typeset mr_disks
:cl_pvo(0.528):datavg[mr_recovery:61] typeset disk_list
:cl_pvo(0.528):datavg[mr_recovery:62] typeset hdisk
:cl_pvo(0.530):datavg[mr_recovery:64] lsvg -p datavg
:cl_pvo(0.530):datavg[mr_recovery:64] 2> /dev/null
:cl_pvo(0.531):datavg[mr_recovery:64] grep -iw missing
:cl_pvo(0.551):datavg[mr_recovery:64] missing_disks=''
:cl_pvo(0.551):datavg[mr_recovery:66] [[ -n '' ]]
:cl_pvo(0.553):datavg[mr_recovery:89] lsvg -p datavg
:cl_pvo(0.553):datavg[mr_recovery:89] 2> /dev/null
:cl_pvo(0.555):datavg[mr_recovery:89] grep -iw removed
:cl_pvo(0.574):datavg[mr_recovery:89] removed_disks=''
:cl_pvo(0.574):datavg[mr_recovery:91] [[ -n '' ]]
:cl_pvo(0.574):datavg[varyonp:518] : Restore the fence height to read only, for passive varyon
:cl_pvo(0.574):datavg[varyonp:520] cl_set_vg_fence_height -c datavg ro
cl_set_vg_fence_height[126]: version @(#)10 1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37
cl_set_vg_fence_height[180]: open(/usr/es/sbin/cluster/etc/vg/datavg.uuid, O_RDONLY)
cl_set_vg_fence_height[214]: read(datavg, 16)
cl_set_vg_fence_height[237]: close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)
cl_set_vg_fence_height[265]: sfwSetFenceGroup(vg=datavg uuid=ec2db4422261eae02091227fb9e53c88 height=ro(2))
:cl_pvo(0.577):datavg[varyonp:521] RC=0
:cl_pvo(0.577):datavg[varyonp:522] : Return code from volume group fencing for datavg is 0
:cl_pvo(0.577):datavg[varyonp:523] (( 0 != 0 ))
:cl_pvo(0.577):datavg[varyonp:533] return 0
:cl_pvo(0.577):datavg[1073] return 0
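That completes the passive varyon path for datavg. The sequence varyonp follows is visible above: raise the disk fence to read/write (varyon needs write access), varyonvg -c -P -O for passive concurrent mode, scan for missing or removed disks, then drop the fence back to read-only. A condensed sketch, assuming the PowerHA utilities seen in this trace and abbreviating the error handling:

    vg=datavg
    cl_set_vg_fence_height -c $vg rw                # varyon requires read/write
    varyonvg -c -P -O $vg                           # passive concurrent varyon
    rc=$?
    if (( rc == 0 ))
    then
        # mr_recovery: detect disks LVM reports as missing or removed
        lsvg -p $vg 2>/dev/null | grep -iw missing
        lsvg -p $vg 2>/dev/null | grep -iw removed
    fi
    cl_set_vg_fence_height -c $vg ro                # read-only suffices for passive mode
    exit $rc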
:node_up[406] : exit status of cl_pvo is: 0
:node_up[422] ls '/dev/vpath*'
:node_up[422] 1> /dev/null 2>& 1
:node_up[432] : Configure any split and merge policies.
:node_up[434] rm -f /usr/es/sbin/cluster/etc/smm_oflag
:node_up[435] [[ -z '' ]]
:node_up[438] : If this is the first node up, configure split merge handling.
:node_up[440] cl_cfg_sm_rt
:cl_cfg_sm_rt[738] version=1.34
:cl_cfg_sm_rt[741] clctrl_rc=0
:cl_cfg_sm_rt[741] typeset -li clctrl_rc
:cl_cfg_sm_rt[742] src_rc=0
:cl_cfg_sm_rt[742] typeset -li src_rc
:cl_cfg_sm_rt[743] cl_migcheck_rc=0
:cl_cfg_sm_rt[743] typeset -li cl_migcheck_rc
:cl_cfg_sm_rt[744] bad_policy=''
:cl_cfg_sm_rt[745] SMP=''
:cl_cfg_sm_rt[748] : If we are in migration - if all nodes are not up to this level - do not
:cl_cfg_sm_rt[749] : attempt any configuration.
:cl_cfg_sm_rt[751] clmixver
:cl_cfg_sm_rt[751] version=22
:cl_cfg_sm_rt[752] (( 22 < 14 ))
:cl_cfg_sm_rt[761] : Retrieve configured policies
:cl_cfg_sm_rt[763] clodmget -q 'policy = action' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[763] Action=Reboot
:cl_cfg_sm_rt[764] clodmget -q 'policy = split' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[764] Split=None
:cl_cfg_sm_rt[765] clodmget -q 'policy = merge' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[765] Merge=Majority
:cl_cfg_sm_rt[766] clodmget -q 'policy = tiebreaker' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[766] TieBreaker=''
:cl_cfg_sm_rt[767] clodmget -q 'policy = nfs_quorumserver' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[767] nfs_quorumserver=''
:cl_cfg_sm_rt[768] clodmget -q 'policy = local_quorumdirectory' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[768] local_quorumdirectory=''
:cl_cfg_sm_rt[769] clodmget -q 'policy = remote_quorumdirectory' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[769] remote_quorumdirectory=''
:cl_cfg_sm_rt[770] clodmget -q 'policy = anhp' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[770] is_anhp=''
:cl_cfg_sm_rt[771] clodmget -q 'policy = scsi' -f value -n HACMPsplitmerge
:cl_cfg_sm_rt[771] is_scsi=''
:cl_cfg_sm_rt[772] clodmget -q name=clutils.log -f value -n HACMPlogs
:cl_cfg_sm_rt[772] CLUTILS_LOG=/var/hacmp/log/clutils.log
:cl_cfg_sm_rt[775] : If policies are unset, apply the default policies
:cl_cfg_sm_rt[777] Split=None
:cl_cfg_sm_rt[778] Merge=Majority
:cl_cfg_sm_rt[779] Action=Reboot
:cl_cfg_sm_rt[782] : If tiebreaker was a configured policy, be sure that one was defined
:cl_cfg_sm_rt[784] [[ -z '' ]]
:cl_cfg_sm_rt[786] [[ None == TieBreaker ]]
:cl_cfg_sm_rt[790] [[ Majority == TieBreaker ]]
:cl_cfg_sm_rt[795] [[ -n '' ]]
:cl_cfg_sm_rt[807] : Set up the interlock file for use by smcaactrl. This tells
:cl_cfg_sm_rt[808] : smcaactrl to allow the following CAA operations.
:cl_cfg_sm_rt[810] date
:cl_cfg_sm_rt[810] 1> /usr/es/sbin/cluster/etc/cl_cfg_sm_rt.28049740
:cl_cfg_sm_rt[811] trap 'on_exit $?' EXIT
:cl_cfg_sm_rt[814] : Setting up CAA tunable local_merge_policy
:cl_cfg_sm_rt[816] typeset -i caa_level
:cl_cfg_sm_rt[817] lslpp -l bos.cluster.rte
:cl_cfg_sm_rt[817] grep bos.cluster.rte
:cl_cfg_sm_rt[817] uniq
:cl_cfg_sm_rt[817] awk -F ' ' '{print $2}'
:cl_cfg_sm_rt[817] tr -d .
:cl_cfg_sm_rt[817] caa_level=725102
:cl_cfg_sm_rt[818] (( 725102 >=7140 ))
:cl_cfg_sm_rt[819] configure_local_merge_policy
:cl_cfg_sm_rt[configure_local_merge_policy:665] typeset -i clctrl_rc
:cl_cfg_sm_rt[configure_local_merge_policy:666] [[ -z '' ]]
:cl_cfg_sm_rt[configure_local_merge_policy:666] [[ -z '' ]]
:cl_cfg_sm_rt[configure_local_merge_policy:667] capability=0
:cl_cfg_sm_rt[configure_local_merge_policy:667] typeset -i capability
:cl_cfg_sm_rt[configure_local_merge_policy:669] cl_get_capabilities -i 6
:cl_cfg_sm_rt[configure_local_merge_policy:669] 2>& 1
:cl_cfg_sm_rt[configure_local_merge_policy:669] caa_sm_capability=$':cl_cfg_sm_rt[configure_local_merge_policy:669] LC_ALL=C\ncl_get_capabilities[178]: version 1.9\ncapability is 6\n\tid: 6 version: 1 flag: 1 '
:cl_cfg_sm_rt[configure_local_merge_policy:670] [[ -n $':cl_cfg_sm_rt[configure_local_merge_policy:669] LC_ALL=C\ncl_get_capabilities[178]: version 1.9\ncapability is 6\n\tid: 6 version: 1 flag: 1 ' ]]
:cl_cfg_sm_rt[configure_local_merge_policy:674] : If Sub Cluster Split Merge capability is defined
:cl_cfg_sm_rt[configure_local_merge_policy:675] : and globally available, then capability is set to 1
:cl_cfg_sm_rt[configure_local_merge_policy:677] capability='1 '
:cl_cfg_sm_rt[configure_local_merge_policy:680] (( 1 == 1 ))
:cl_cfg_sm_rt[configure_local_merge_policy:682] : Sub Cluster Split-Merge capability is available cluster wide
:cl_cfg_sm_rt[configure_local_merge_policy:684] [[ Majority != None ]]
:cl_cfg_sm_rt[configure_local_merge_policy:686] clctrl -tune -o local_merge_policy=h
1 tunable updated on cluster epprda_cluster.
:cl_cfg_sm_rt[configure_local_merge_policy:687] clctrl_rc=0
:cl_cfg_sm_rt[configure_local_merge_policy:688] (( 0 != 0 ))
:cl_cfg_sm_rt[configure_local_merge_policy:725] return 0
:cl_cfg_sm_rt[820] rc=0
:cl_cfg_sm_rt[820] typeset -i rc
:cl_cfg_sm_rt[821] (( 0 < 0 ))
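The fileset level is flattened to an integer (7.2.5.102 becomes 725102) so a plain arithmetic compare can gate the CAA tunable; with Merge=Majority (anything other than None) the local merge policy is set to h, heuristic. A condensed sketch of the gate, with Merge assumed to hold the configured policy as above (the capability probe via cl_get_capabilities is folded out for brevity):

    caa_level=$(lslpp -l bos.cluster.rte | grep bos.cluster.rte | uniq |
                awk -F' ' '{print $2}' | tr -d .)
    if (( caa_level >= 7140 )) && [[ $Merge != None ]]
    then
        clctrl -tune -o local_merge_policy=h        # h = heuristic merge handling
    fi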
:cl_cfg_sm_rt[827] : Configure CAA in accordance with the specified or defaulted policies
:cl_cfg_sm_rt[828] : for Merge
:cl_cfg_sm_rt[830] clctrl -tune -a
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).communication_mode = u
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).config_timeout = 240
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).deadman_mode = a
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).dr_enabled = 1
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).link_timeout = 30000
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).local_merge_policy = h
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).network_fdt = 20000
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).no_if_traffic_monitor = 0
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).node_down_delay = 10000
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).node_timeout = 30000
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).packet_ttl = 32
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).remote_hb_factor = 1
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).repos_mode = e
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).site_merge_policy = h
:cl_cfg_sm_rt[831] clctrl_rc=0
:cl_cfg_sm_rt[832] : Return code from 'clctrl -tune -a' is 0
:cl_cfg_sm_rt[835] : If the current deadman mode is not set to ASSERT,
:cl_cfg_sm_rt[836] : change it to that
:cl_cfg_sm_rt[842] clctrl -tune -x deadman_mode
:cl_cfg_sm_rt[842] cut -f2 -d:
:cl_cfg_sm_rt[842] current_deadman_mode=a
:cl_cfg_sm_rt[843] [[ a != a ]]
:cl_cfg_sm_rt[849] : Determine the current site merge policy, to see if it needs
:cl_cfg_sm_rt[850] : to be changed
:cl_cfg_sm_rt[852] clctrl -tune -x site_merge_policy
:cl_cfg_sm_rt[852] cut -f2 -d:
:cl_cfg_sm_rt[852] current_merge_policy=h
:cl_cfg_sm_rt[854] [[ Majority == Manual ]]
:cl_cfg_sm_rt[865] [[ Majority == None ]]
:cl_cfg_sm_rt[878] : Everything else - tie breaker, majority, nfs - is heuristic merge policy
:cl_cfg_sm_rt[880] [[ h != h ]]
:cl_cfg_sm_rt[886] clctrl_rc=0
:cl_cfg_sm_rt[887] (( 0 != 0 ))
:cl_cfg_sm_rt[901] [[ -n '' ]]
:cl_cfg_sm_rt[919] RSCT_START_RETRIES=0
:cl_cfg_sm_rt[919] typeset -li RSCT_START_RETRIES
:cl_cfg_sm_rt[920] MIN_RSCT_RETRIES=1
:cl_cfg_sm_rt[920] typeset -li MIN_RSCT_RETRIES
:cl_cfg_sm_rt[921] MAX_RSCT_RETRIES=15
:cl_cfg_sm_rt[921] typeset -li MAX_RSCT_RETRIES
:cl_cfg_sm_rt[922] grep ^RSCT_START_RETRIES /etc/environment
:cl_cfg_sm_rt[922] eval
:cl_cfg_sm_rt[923] (( 0 < 1 ))
:cl_cfg_sm_rt[923] RSCT_START_RETRIES=1
:cl_cfg_sm_rt[924] (( 1 > 15 ))
:cl_cfg_sm_rt[926] RSCT_TB_WAITTIME=0
:cl_cfg_sm_rt[926] typeset -li RSCT_TB_WAITTIME
:cl_cfg_sm_rt[927] grep ^RSCT_TB_WAITTIME /etc/environment
:cl_cfg_sm_rt[927] eval
:cl_cfg_sm_rt[928] (( 0 <= 0 ))
:cl_cfg_sm_rt[928] RSCT_TB_WAITTIME=30
:cl_cfg_sm_rt[930] RSCT_START_WAIT=0
:cl_cfg_sm_rt[930] typeset -li RSCT_START_WAIT
:cl_cfg_sm_rt[931] MIN_RSCT_WAIT=10
:cl_cfg_sm_rt[931] typeset -li MIN_RSCT_WAIT
:cl_cfg_sm_rt[932] MAX_RSCT_WAIT=60
:cl_cfg_sm_rt[932] typeset -li MAX_RSCT_WAIT
:cl_cfg_sm_rt[933] grep ^RSCT_START_WAIT /etc/environment
:cl_cfg_sm_rt[933] eval
:cl_cfg_sm_rt[934] (( 0 < 10 ))
:cl_cfg_sm_rt[934] RSCT_START_WAIT=10
:cl_cfg_sm_rt[935] (( 10 > 60 ))
:cl_cfg_sm_rt[937] (( retries=0))
:cl_cfg_sm_rt[937] (( 0 < 1))
:cl_cfg_sm_rt[939] lsrsrc IBM.PeerNode
:cl_cfg_sm_rt[939] 1>> /var/hacmp/log/clutils.log 2>& 1
:cl_cfg_sm_rt[941] break
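The retry knobs come from /etc/environment via eval and are clamped into [1,15] attempts and a [10,60] second wait before the lsrsrc probe loop; here IBM.PeerNode answered on the first try. A sketch of the clamped-retry pattern (the sleep between probes is an assumption; the successful first probe above never reaches it):

    typeset -li RSCT_START_RETRIES=0 RSCT_START_WAIT=0
    eval $(grep ^RSCT_START_RETRIES /etc/environment)
    eval $(grep ^RSCT_START_WAIT /etc/environment)
    (( RSCT_START_RETRIES < 1 ))  && RSCT_START_RETRIES=1
    (( RSCT_START_RETRIES > 15 )) && RSCT_START_RETRIES=15
    (( RSCT_START_WAIT < 10 ))    && RSCT_START_WAIT=10
    (( RSCT_START_WAIT > 60 ))    && RSCT_START_WAIT=60
    for (( retries=0; retries < RSCT_START_RETRIES; retries++ ))
    do
        lsrsrc IBM.PeerNode >> /var/hacmp/log/clutils.log 2>&1 && break
        sleep $RSCT_START_WAIT                      # assumed back-off between probes
    done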
:cl_cfg_sm_rt[947] (( 0 >= 1 ))
:cl_cfg_sm_rt[954] : Configure RSCT in accordance with the specified or defaulted policies
:cl_cfg_sm_rt[955] : for Split
:cl_cfg_sm_rt[965] CT_MANAGEMENT_SCOPE=2
:cl_cfg_sm_rt[965] export CT_MANAGEMENT_SCOPE
:cl_cfg_sm_rt[966] lsrsrc -t -c -x IBM.PeerNode OpQuorumTieBreaker
:cl_cfg_sm_rt[966] Current_TB='"Success" '
:cl_cfg_sm_rt[967] Current_TB='"Success'
:cl_cfg_sm_rt[968] Current_TB=Success
:cl_cfg_sm_rt[969] [[ None == None ]]
:cl_cfg_sm_rt[971] [[ Success == Success ]]
:cl_cfg_sm_rt[973] chrsrc -c IBM.PeerNode OpQuorumTieBreaker=Operator
:cl_cfg_sm_rt[974] src_rc=0
:cl_cfg_sm_rt[975] (( 0 != 0 ))
:cl_cfg_sm_rt[981] (( 0 == 0 ))
:cl_cfg_sm_rt[983] chrsrc -s Name='="Success"' IBM.TieBreaker PostReserveWaitTime=30
:cl_cfg_sm_rt[984] src_rc=0
:cl_cfg_sm_rt[985] (( 0 != 0 ))
:cl_cfg_sm_rt[990] chrsrc -c IBM.PeerNode OpQuorumTieBreaker=Success
:cl_cfg_sm_rt[991] src_rc=0
:cl_cfg_sm_rt[992] (( 0 != 0 ))
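Split=None maps to the "Success" tiebreaker, and even though it is already current, the script refreshes its PostReserveWaitTime: OpQuorumTieBreaker is parked on Operator before the "Success" tiebreaker resource is touched, then the policy is restored. The three chrsrc calls as a sketch (CT_MANAGEMENT_SCOPE=2 selects the peer domain, as exported above):

    export CT_MANAGEMENT_SCOPE=2
    chrsrc -c IBM.PeerNode OpQuorumTieBreaker=Operator              # park off "Success"
    chrsrc -s 'Name="Success"' IBM.TieBreaker PostReserveWaitTime=30
    chrsrc -c IBM.PeerNode OpQuorumTieBreaker=Success               # restore the policy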
:cl_cfg_sm_rt[1044] src_rc=0
:cl_cfg_sm_rt[1045] (( 0 != 0 ))
:cl_cfg_sm_rt[1053] : Configure RSCT Action
:cl_cfg_sm_rt[1055] chrsrc -c IBM.PeerNode QuorumType=4
:cl_cfg_sm_rt[1056] src_rc=0
:cl_cfg_sm_rt[1057] (( 0 != 0 ))
:cl_cfg_sm_rt[1064] chrsrc -c IBM.PeerNode CriticalMode=2
:cl_cfg_sm_rt[1065] src_rc=0
:cl_cfg_sm_rt[1066] (( 0 != 0 ))
:cl_cfg_sm_rt[1073] [[ Reboot == Reboot ]]
:cl_cfg_sm_rt[1075] chrsrc -c IBM.PeerNode CritRsrcProtMethod=1
:cl_cfg_sm_rt[1077] src_rc=0
:cl_cfg_sm_rt[1078] (( 0 != 0 ))
:cl_cfg_sm_rt[1086] : Configure RSCT Critical Resource Daemon Grace Period for cluster level.
:cl_cfg_sm_rt[1088] typeset grace_period
:cl_cfg_sm_rt[1089] clodmget -f crit_daemon_restart_grace_period HACMPcluster
:cl_cfg_sm_rt[1089] grace_period=60
:cl_cfg_sm_rt[1090] lsrsrc -c IBM.PeerNode
:cl_cfg_sm_rt[1090] LC_ALL=C
:cl_cfg_sm_rt[1090] grep CritDaemonRestartGracePeriod
:cl_cfg_sm_rt[1090] awk -F= '{print $2}'
:cl_cfg_sm_rt[1090] rsct_grace_period=' 60'
:cl_cfg_sm_rt[1091] [[ -n ' 60' ]]
:cl_cfg_sm_rt[1092] (( 60 != 60 ))
:cl_cfg_sm_rt[1104] : Configure RSCT Critical Resource Daemon Grace Period for node level.
:cl_cfg_sm_rt[1106] typeset node_grace_period
:cl_cfg_sm_rt[1107] typeset node_list
:cl_cfg_sm_rt[1108] typeset rsct_node_grace_period
:cl_cfg_sm_rt[1110] : Get the CAA active nodes list
:cl_cfg_sm_rt[1112] lscluster -m
:cl_cfg_sm_rt[1112] grep -p 'State of node: UP'
:cl_cfg_sm_rt[1112] grep -w 'Node name:'
:cl_cfg_sm_rt[1112] cut -f2 -d:
:cl_cfg_sm_rt[1112] node_list=$' epprda\n epprds'
:cl_cfg_sm_rt[1115] clodmget -n -q object='COMMUNICATION_PATH and value=epprda' -f name HACMPnode
:cl_cfg_sm_rt[1115] host_name=epprda
:cl_cfg_sm_rt[1116] clodmget -n -q object='CRIT_DAEMON_RESTART_GRACE_PERIOD and name=epprda' -f value HACMPnode
:cl_cfg_sm_rt[1116] node_grace_period=''
:cl_cfg_sm_rt[1117] [[ -n '' ]]
:cl_cfg_sm_rt[1115] clodmget -n -q object='COMMUNICATION_PATH and value=epprds' -f name HACMPnode
:cl_cfg_sm_rt[1115] host_name=epprds
:cl_cfg_sm_rt[1116] clodmget -n -q object='CRIT_DAEMON_RESTART_GRACE_PERIOD and name=epprds' -f value HACMPnode
:cl_cfg_sm_rt[1116] node_grace_period=''
:cl_cfg_sm_rt[1117] [[ -n '' ]]
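Per-node grace periods follow the same pattern at node scope: the CAA UP-node list from lscluster -m is mapped back to HACMP node names through COMMUNICATION_PATH, then each node's CRIT_DAEMON_RESTART_GRACE_PERIOD is looked up. Neither epprda nor epprds has one set, so the cluster-level value of 60 stands. A sketch of the loop:

    lscluster -m | grep -p 'State of node: UP' |
        grep -w 'Node name:' | cut -f2 -d: | while read host
    do
        name=$(clodmget -n -q "object=COMMUNICATION_PATH and value=$host" -f name HACMPnode)
        gp=$(clodmget -n -q "object=CRIT_DAEMON_RESTART_GRACE_PERIOD and name=$name" -f value HACMPnode)
        [[ -n $gp ]] && print "node $name grace period: $gp"       # none set in this trace
    done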
:cl_cfg_sm_rt[1134] : Success exit. Display the CAA and RSCT configuration
:cl_cfg_sm_rt[1136] clctrl -tune -a
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).communication_mode = u
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).config_timeout = 240
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).deadman_mode = a
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).dr_enabled = 1
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).link_timeout = 30000
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).local_merge_policy = h
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).network_fdt = 20000
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).no_if_traffic_monitor = 0
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).node_down_delay = 10000
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).node_timeout = 30000
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).packet_ttl = 32
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).remote_hb_factor = 1
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).repos_mode = e
epprda_cluster(f43c91c2-9ee2-11ed-8018-fae6134ea920).site_merge_policy = h
:cl_cfg_sm_rt[1137] lscluster -m
Calling node query for all nodes...
Node query number of nodes examined: 2
Node name: epprda
Cluster shorthand id for node: 1
UUID for node: f42873b8-9ee2-11ed-8018-fae6134ea920
State of node: UP NODE_LOCAL
Reason: NONE
Smoothed rtt to node: 0
Mean Deviation in network rtt to node: 0
Number of clusters node is a member in: 1
CLUSTER NAME SHID UUID
epprda_cluster 0 f43c91c2-9ee2-11ed-8018-fae6134ea920
SITE NAME SHID UUID
LOCAL 1 51735173-5173-5173-5173-517351735173
Points of contact for node: 0
----------------------------------------------------------------------------
Node name: epprds
Cluster shorthand id for node: 2
UUID for node: f42873fe-9ee2-11ed-8018-fae6134ea920
State of node: UP
Reason: NONE
Smoothed rtt to node: 8
Mean Deviation in network rtt to node: 3
Number of clusters node is a member in: 1
CLUSTER NAME SHID UUID
epprda_cluster 0 f43c91c2-9ee2-11ed-8018-fae6134ea920
SITE NAME SHID UUID
LOCAL 1 51735173-5173-5173-5173-517351735173
Points of contact for node: 1
-----------------------------------------------------------------------
Interface State Protocol Status SRC_IP->DST_IP
-----------------------------------------------------------------------
tcpsock->02 UP IPv4 none 61.81.244.134->61.81.244.123
:cl_cfg_sm_rt[1138] lsrsrc -x -A b IBM.PeerNode
resource 1:
Name = "epprds"
NodeList = {2}
RSCTVersion = "3.2.6.4"
ClassVersions = {}
CritRsrcProtMethod = 0
IsQuorumNode = 1
IsPreferredGSGL = 1
NodeUUID = "f42873fe-9ee2-11ed-8018-fae6134ea920"
HostName = "epprds"
TBPriority = 0
CritDaemonRestartGracePeriod = -1
ActivePeerDomain = "epprda_cluster"
NodeNameList = {"epprds"}
OpState = 1
ConfigChanged = 1
CritRsrcActive = 0
OpUsabilityState = 1
MaintenanceState = 0
resource 2:
Name = "epprda"
NodeList = {1}
RSCTVersion = "3.2.6.4"
ClassVersions = {}
CritRsrcProtMethod = 0
IsQuorumNode = 1
IsPreferredGSGL = 1
NodeUUID = "f42873b8-9ee2-11ed-8018-fae6134ea920"
HostName = "epprda"
TBPriority = 0
CritDaemonRestartGracePeriod = -1
ActivePeerDomain = "epprda_cluster"
NodeNameList = {"epprda"}
OpState = 1
ConfigChanged = 1
CritRsrcActive = 0
OpUsabilityState = 1
MaintenanceState = 0
:cl_cfg_sm_rt[1139] lsrsrc -x -c -A b IBM.PeerNode
resource 1:
CommittedRSCTVersion = "3.2.2.0"
ActiveVersionChanging = 0
OpQuorumOverride = 0
CritRsrcProtMethod = 1
OpQuorumTieBreaker = "Success"
QuorumType = 4
QuorumGroupName = ""
Fanout = 32
OpFenceGroup = ""
NodeCleanupCommand = ""
NodeCleanupCriteria = ""
QuorumLessStartupTimeout = 120
CriticalMode = 2
NotifyQuorumChangedCommand = ""
NamePolicy = 1
LiveUpdateOptions = ""
QuorumNotificationRespWaitTime = 0
MaintenanceModeConfig = ""
CritDaemonRestartGracePeriod = 60
:cl_cfg_sm_rt[1141] return 0
:cl_cfg_sm_rt[1] on_exit 0
:node_up[441] : exit status of cl_cfg_sm_rt is 0
:node_up[498] : Enable NFS crossmounts during manual start
:node_up[500] [[ -n false ]]
:node_up[500] [[ false == true ]]
:node_up[607] : When RG dependencies are not configured we call node_up_local/remote,
:node_up[608] : followed by process_resources to process any remaining groups
:node_up[610] [[ TRUE == FALSE ]]
:node_up[657] [[ epprda == epprda ]]
:node_up[660] : Perform any deferred TCP daemon startup, if necessary,
:node_up[661] : along with any necessary start up of iSCSI devices.
:node_up[663] cl_telinit
:cl_telinit[178] version=%I%
:cl_telinit[182] TELINIT_FILE=/usr/es/sbin/cluster/.telinit
:cl_telinit[183] USE_TELINIT_FILE=/usr/es/sbin/cluster/.use_telinit
:cl_telinit[185] [[ -f /usr/es/sbin/cluster/.use_telinit ]]
:cl_telinit[189] USE_TELINIT=0
:cl_telinit[198] [[ '' == -boot ]]
:cl_telinit[236] cl_lsitab clinit
:cl_telinit[236] 1> /dev/null 2>& 1
:cl_telinit[239] : telinit a disabled
:cl_telinit[241] return 0
:node_up[664] : exit status of cl_telinit is: 0
:node_up[667] return 0
Jan 28 2023 18:03:36 EVENT COMPLETED: node_up epprda 0
|2023-01-28T18:03:36|8560|EVENT COMPLETED: node_up epprda 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T18:03:36.185464
+ echo '|2023-01-28T18:03:36.185464|INFO: node_up|epprda|0'
+ 1>> /var/hacmp/availability/clavailability.log
Jan 28 2023 18:03:38 EVENT START: rg_move_fence epprda 1
|2023-01-28T18:03:38|8561|EVENT START: rg_move_fence epprda 1|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T18:03:38.399965
+ echo '|2023-01-28T18:03:38.399965|INFO: rg_move_fence|epprd_rg|epprda|1'
+ 1>> /var/hacmp/availability/clavailability.log
:rg_move_fence[62] [[ high == high ]]
:rg_move_fence[62] version=1.11
:rg_move_fence[63] NODENAME=epprda
:rg_move_fence[63] export NODENAME
:rg_move_fence[65] set -u
:rg_move_fence[67] [ 2 != 2 ]
:rg_move_fence[73] set +u
:rg_move_fence[75] [[ -z TRUE ]]
:rg_move_fence[80] [[ TRUE == TRUE ]]
:rg_move_fence[82] LOCAL_NODENAME=epprda
:rg_move_fence[83] odmget -qid=1 HACMPgroup
:rg_move_fence[83] egrep 'group ='
:rg_move_fence[83] awk '{print $3}'
:rg_move_fence[83] eval RGNAME='"epprd_rg"'
:rg_move_fence[1] RGNAME=epprd_rg
+epprd_rg:rg_move_fence[84] GROUPNAME=epprd_rg
+epprd_rg:rg_move_fence[85] group_state='$RESGRP_epprd_rg_epprda'
+epprd_rg:rg_move_fence[86] set +u
+epprd_rg:rg_move_fence[87] eval print '$RESGRP_epprd_rg_epprda'
+epprd_rg:rg_move_fence[1] print
+epprd_rg:rg_move_fence[87] RG_MOVE_ONLINE=''
+epprd_rg:rg_move_fence[87] export RG_MOVE_ONLINE
+epprd_rg:rg_move_fence[88] set -u
+epprd_rg:rg_move_fence[89] RG_MOVE_ONLINE=TMP_ERROR
+epprd_rg:rg_move_fence[91] set -a
+epprd_rg:rg_move_fence[92] clsetenvgrp epprda rg_move epprd_rg ''
:clsetenvgrp[+49] [[ high = high ]]
:clsetenvgrp[+49] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clsetenvgrp.sh 1$
:clsetenvgrp[+51] usingVer=clSetenvgrp
:clsetenvgrp[+56] clSetenvgrp epprda rg_move epprd_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+57] exit 0
+epprd_rg:rg_move_fence[92] clsetenvgrp_output=FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
+epprd_rg:rg_move_fence[93] RC=0
+epprd_rg:rg_move_fence[94] eval FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
+epprd_rg:rg_move_fence[1] FORCEDOWN_GROUPS=''
+epprd_rg:rg_move_fence[2] RESOURCE_GROUPS=''
+epprd_rg:rg_move_fence[3] HOMELESS_GROUPS=''
+epprd_rg:rg_move_fence[4] HOMELESS_FOLLOWER_GROUPS=''
+epprd_rg:rg_move_fence[5] ERRSTATE_GROUPS=''
+epprd_rg:rg_move_fence[6] PRINCIPAL_ACTIONS=''
+epprd_rg:rg_move_fence[7] ASSOCIATE_ACTIONS=''
+epprd_rg:rg_move_fence[8] AUXILLIARY_ACTIONS=''
+epprd_rg:rg_move_fence[8] SIBLING_GROUPS=''
+epprd_rg:rg_move_fence[9] SIBLING_NODES_BY_GROUP=''
+epprd_rg:rg_move_fence[10] SIBLING_ACQUIRING_GROUPS=''
+epprd_rg:rg_move_fence[11] SIBLING_ACQUIRING_NODES_BY_GROUP=''
+epprd_rg:rg_move_fence[12] SIBLING_RELEASING_GROUPS=''
+epprd_rg:rg_move_fence[13] SIBLING_RELEASING_NODES_BY_GROUP=''
+epprd_rg:rg_move_fence[95] set +a
+epprd_rg:rg_move_fence[96] [ 0 -ne 0 ]
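clsetenvgrp simply relays clSetenvgrp's output: a batch of NAME="value" lines. The caller captures that text, evals it between set -a and set +a so every assignment lands exported in the event environment, then checks the helper's return code. The pattern as a sketch:

    set -a                                   # auto-export all assignments below
    clsetenvgrp_output=$(clsetenvgrp epprda rg_move epprd_rg '')
    RC=$?
    eval "$clsetenvgrp_output"               # FORCEDOWN_GROUPS="", RESOURCE_GROUPS="", ...
    set +a
    (( RC != 0 )) && exit 1                  # trace shows RC=0, so processing continues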
+epprd_rg:rg_move_fence[103] process_resources FENCE
:rg_move_fence[3318] version=1.169
:rg_move_fence[3321] STATUS=0
:rg_move_fence[3322] sddsrv_off=FALSE
:rg_move_fence[3324] true
:rg_move_fence[3326] : call rgpa, and it will tell us what to do next
:rg_move_fence[3328] set -a
:rg_move_fence[3329] clRGPA FENCE
:clRGPA[+47] [[ high = high ]]
:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
:clRGPA[+49] usingVer=clrgpa
:clRGPA[+54] clrgpa FENCE
2023-01-28T18:03:38.503319 clrgpa
:clRGPA[+55] exit 0
:rg_move_fence[3329] eval JOB_TYPE=NONE
:rg_move_fence[1] JOB_TYPE=NONE
:rg_move_fence[3330] RC=0
:rg_move_fence[3331] set +a
:rg_move_fence[3333] (( 0 != 0 ))
:rg_move_fence[3342] RESOURCE_GROUPS=''
:rg_move_fence[3343] GROUPNAME=''
:rg_move_fence[3343] export GROUPNAME
:rg_move_fence[3353] IS_SERVICE_START=1
:rg_move_fence[3354] IS_SERVICE_STOP=1
:rg_move_fence[3360] [[ NONE == RELEASE ]]
:rg_move_fence[3360] [[ NONE == ONLINE ]]
:rg_move_fence[3729] break
:rg_move_fence[3740] : If sddsrv was turned off above, turn it back on again
:rg_move_fence[3742] [[ FALSE == TRUE ]]
:rg_move_fence[3747] exit 0
+epprd_rg:rg_move_fence[104] : exit status of process_resources FENCE is: 0
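process_resources is a dispatcher: each clRGPA call returns the next job as NAME=value pairs that are eval'd under set -a, and JOB_TYPE drives the dispatch; NONE (as here, with no groups needing fencing) breaks the loop. A reduced sketch of the cycle seen throughout this event, showing only a subset of the real job types:

    while true
    do
        set -a
        eval $(clRGPA)                       # e.g. JOB_TYPE=NONE, or ACQUIRE/WPAR/DISKS...
        set +a
        case $JOB_TYPE in
            NONE)            break ;;                    # nothing (more) to do
            SERVICE_LABELS)  acquire_service_labels ;;
            DISKS)           get_disks_main ;;
        esac
    done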
+epprd_rg:rg_move_fence[107] [[ TRUE == TRUE ]]
+epprd_rg:rg_move_fence[109] export EVENT_TYPE
+epprd_rg:rg_move_fence[110] echo ACQUIRE_PRIMARY
ACQUIRE_PRIMARY
+epprd_rg:rg_move_fence[111] [[ -n '' ]]
+epprd_rg:rg_move_fence[141] exit 0
Jan 28 2023 18:03:38 EVENT COMPLETED: rg_move_fence epprda 1 0
|2023-01-28T18:03:38|8561|EVENT COMPLETED: rg_move_fence epprda 1 0|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T18:03:38.595696
+ echo '|2023-01-28T18:03:38.595696|INFO: rg_move_fence|epprd_rg|epprda|1|0'
+ 1>> /var/hacmp/availability/clavailability.log
Jan 28 2023 18:03:38 EVENT START: rg_move_acquire epprda 1
|2023-01-28T18:03:38|8561|EVENT START: rg_move_acquire epprda 1|
+ clcycle clavailability.log
+ 1> /dev/null 2>& 1
+ cltime
+ DATE=2023-01-28T18:03:38.789540
+ echo '|2023-01-28T18:03:38.789540|INFO: rg_move_acquire|epprd_rg|epprda|1'
+ 1>> /var/hacmp/availability/clavailability.log
:rg_move_acquire[+54] [[ high == high ]]
:rg_move_acquire[+54] version=1.9.1.7
:rg_move_acquire[+57] set -u
:rg_move_acquire[+59] [ 2 != 2 ]
:rg_move_acquire[+65] set +u
:rg_move_acquire[+67] :rg_move_acquire[+67] clodmget -n -q id=1 -f group HACMPgroup
RG=epprd_rg
:rg_move_acquire[+68] export RG
:rg_move_acquire[+70] [[ ACQUIRE_PRIMARY == ACQUIRE_PRIMARY ]]
:rg_move_acquire[+75] typeset -i anhp_ret=0
:rg_move_acquire[+76] typeset -i scsi_ret=0
:rg_move_acquire[+78] clodmget -n -q policy = anhp -f value HACMPsplitmerge
:rg_move_acquire[+78] typeset ANHP_ENABLED=
:rg_move_acquire[+78] [[ == Yes ]]
:rg_move_acquire[+87] clodmget -n -q policy = scsi -f value HACMPsplitmerge
:rg_move_acquire[+87] typeset SCSIPR_ENABLED=
:rg_move_acquire[+87] [[ == Yes ]]
:rg_move_acquire[+106] (( 0 == 1 && 0 == 1 ))
:rg_move_acquire[+109] (( 0 == 1 && 0 == 0 ))
:rg_move_acquire[+112] (( 0 == 1 && 0 == 0 ))
:rg_move_acquire[+118] clcallev rg_move epprda 1 ACQUIRE
Jan 28 2023 18:03:38 EVENT START: rg_move epprda 1 ACQUIRE
|2023-01-28T18:03:38|8561|EVENT START: rg_move epprda 1 ACQUIRE|
:clevlog[amlog_trace:318] clcycle clavailability.log
:clevlog[amlog_trace:318] 1> /dev/null 2>& 1
:clevlog[amlog_trace:319] cltime
:clevlog[amlog_trace:319] DATE=2023-01-28T18:03:38.920496
:clevlog[amlog_trace:320] echo '|2023-01-28T18:03:38.920496|INFO: rg_move|epprd_rg|epprda|1|ACQUIRE'
:clevlog[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
:get_local_nodename[48] version=1.2.1.28
:get_local_nodename[52] : cllsclstr -N will return the local node if not configured in HACMPcluster
:get_local_nodename[54] ODMDIR=/etc/es/objrepos
:get_local_nodename[54] export ODMDIR
:get_local_nodename[55] nodename=''
:get_local_nodename[55] typeset nodename
:get_local_nodename[56] cllsclstr -N
:get_local_nodename[56] nodename=epprda
:get_local_nodename[57] rc=0
:get_local_nodename[57] typeset -i rc
:get_local_nodename[58] (( 0 != 0 ))
:get_local_nodename[61] : If the node name in HACMPcluster matches a configured node, we are done.
:get_local_nodename[63] clnodename
:get_local_nodename[63] grep -w epprda
:get_local_nodename[63] [[ -n epprda ]]
:get_local_nodename[65] print -- epprda
:get_local_nodename[66] exit 0
:rg_move[76] version=%I%
:rg_move[86] STATUS=0
:rg_move[88] [[ ! -n '' ]]
:rg_move[90] EMULATE=REAL
:rg_move[96] set -u
:rg_move[98] NODENAME=epprda
:rg_move[98] export NODENAME
:rg_move[99] RGID=1
:rg_move[100] (( 3 == 3 ))
:rg_move[102] ACTION=ACQUIRE
:rg_move[108] : serial number for this event is 8561
:rg_move[112] RG_UP_POSTEVENT_ON_NODE=epprda
:rg_move[112] export RG_UP_POSTEVENT_ON_NODE
:rg_move[116] clodmget -qid=1 -f group -n HACMPgroup
:rg_move[116] eval RGNAME=epprd_rg
:rg_move[1] RGNAME=epprd_rg
:rg_move[118] UPDATESTATD=0
:rg_move[119] export UPDATESTATD
:rg_move[123] RG_MOVE_EVENT=true
:rg_move[123] export RG_MOVE_EVENT
:rg_move[128] group_state='$RESGRP_epprd_rg_epprda'
:rg_move[129] set +u
:rg_move[130] eval print '$RESGRP_epprd_rg_epprda'
:rg_move[1] print
:rg_move[130] RG_MOVE_ONLINE=''
:rg_move[130] export RG_MOVE_ONLINE
:rg_move[131] set -u
:rg_move[132] RG_MOVE_ONLINE=TMP_ERROR
:rg_move[139] rm -f /tmp/.NFSSTOPPED
:rg_move[140] rm -f /tmp/.RPCLOCKDSTOPPED
:rg_move[147] set -a
:rg_move[148] clsetenvgrp epprda rg_move epprd_rg
:clsetenvgrp[+49] [[ high = high ]]
:clsetenvgrp[+49] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clsetenvgrp.sh 1$
:clsetenvgrp[+51] usingVer=clSetenvgrp
:clsetenvgrp[+56] clSetenvgrp epprda rg_move epprd_rg
executing clSetenvgrp
clSetenvgrp completed successfully
:clsetenvgrp[+57] exit 0
:rg_move[148] clsetenvgrp_output=FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
:rg_move[149] RC=0
:rg_move[150] eval FORCEDOWN_GROUPS=$'"" \nRESOURCE_GROUPS="" \nHOMELESS_GROUPS="" \nHOMELESS_FOLLOWER_GROUPS="" \nERRSTATE_GROUPS="" \nPRINCIPAL_ACTIONS="" \nASSOCIATE_ACTIONS="" \nAUXILLIARY_ACTIONS="" SIBLING_GROUPS=""\nSIBLING_NODES_BY_GROUP=""\nSIBLING_ACQUIRING_GROUPS=""\nSIBLING_ACQUIRING_NODES_BY_GROUP=""\nSIBLING_RELEASING_GROUPS=""\nSIBLING_RELEASING_NODES_BY_GROUP=""\n '
:rg_move[1] FORCEDOWN_GROUPS=''
:rg_move[2] RESOURCE_GROUPS=''
:rg_move[3] HOMELESS_GROUPS=''
:rg_move[4] HOMELESS_FOLLOWER_GROUPS=''
:rg_move[5] ERRSTATE_GROUPS=''
:rg_move[6] PRINCIPAL_ACTIONS=''
:rg_move[7] ASSOCIATE_ACTIONS=''
:rg_move[8] AUXILLIARY_ACTIONS=''
:rg_move[8] SIBLING_GROUPS=''
:rg_move[9] SIBLING_NODES_BY_GROUP=''
:rg_move[10] SIBLING_ACQUIRING_GROUPS=''
:rg_move[11] SIBLING_ACQUIRING_NODES_BY_GROUP=''
:rg_move[12] SIBLING_RELEASING_GROUPS=''
:rg_move[13] SIBLING_RELEASING_NODES_BY_GROUP=''
:rg_move[151] set +a
:rg_move[155] (( 0 != 0 ))
:rg_move[155] [[ -z epprd_rg ]]
:rg_move[164] [[ -z TRUE ]]
:rg_move[241] AM_SYNC_CALLED_BY=RG_MOVE
:rg_move[241] export AM_SYNC_CALLED_BY
:rg_move[242] process_resources
:process_resources[3318] version=1.169
:process_resources[3321] STATUS=0
:process_resources[3322] sddsrv_off=FALSE
:process_resources[3324] true
:process_resources[3326] : call rgpa, and it will tell us what to do next
:process_resources[3328] set -a
:process_resources[3329] clRGPA
:clRGPA[+47] [[ high = high ]]
:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
:clRGPA[+49] usingVer=clrgpa
:clRGPA[+54] clrgpa
2023-01-28T18:03:39.040868 clrgpa
:clRGPA[+55] exit 0
:process_resources[3329] eval JOB_TYPE=ACQUIRE RESOURCE_GROUPS='"epprd_rg"' PRINCIPAL_ACTION='"ACQUIRE"' AUXILLIARY_ACTION='"NONE"'
:process_resources[1] JOB_TYPE=ACQUIRE
:process_resources[1] RESOURCE_GROUPS=epprd_rg
:process_resources[1] PRINCIPAL_ACTION=ACQUIRE
:process_resources[1] AUXILLIARY_ACTION=NONE
:process_resources[3330] RC=0
:process_resources[3331] set +a
:process_resources[3333] (( 0 != 0 ))
:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ ACQUIRE == RELEASE ]]
+epprd_rg:process_resources[3360] [[ ACQUIRE == ONLINE ]]
+epprd_rg:process_resources[3652] set_resource_group_state ACQUIRING
+epprd_rg:process_resources[set_resource_group_state:82] PS4_FUNC=set_resource_group_state
+epprd_rg:process_resources[set_resource_group_state:82] typeset PS4_FUNC
+epprd_rg:process_resources[set_resource_group_state:83] [[ high == high ]]
+epprd_rg:process_resources[set_resource_group_state:83] set -x
+epprd_rg:process_resources[set_resource_group_state:84] STAT=0
+epprd_rg:process_resources[set_resource_group_state:85] new_status=ACQUIRING
+epprd_rg:process_resources[set_resource_group_state:89] export GROUPNAME
+epprd_rg:process_resources[set_resource_group_state:90] [[ ACQUIRING != DOWN ]]
+epprd_rg:process_resources[set_resource_group_state:92] clchdaemons -d clstrmgr_scripts -t resource_locator -n epprda -o epprd_rg -v ACQUIRING
+epprd_rg:process_resources[set_resource_group_state:100] : Resource Manager Updates
+epprd_rg:process_resources[set_resource_group_state:105] amlog_trace '' 'acquire|epprd_rg|epprda'
+epprd_rg:process_resources[amlog_trace:318] clcycle clavailability.log
+epprd_rg:process_resources[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:process_resources[amlog_trace:319] cltime
+epprd_rg:process_resources[amlog_trace:319] DATE=2023-01-28T18:03:39.075279
+epprd_rg:process_resources[amlog_trace:320] echo '|2023-01-28T18:03:39.075279|INFO: acquire|epprd_rg|epprda'
+epprd_rg:process_resources[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:process_resources[set_resource_group_state:106] cl_RMupdate acquiring epprd_rg process_resources
2023-01-28T18:03:39.099108
2023-01-28T18:03:39.103646
+epprd_rg:process_resources[set_resource_group_state:153] return 0
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T18:03:39.115492 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=WPAR ACTION=ACQUIRE RESOURCE_GROUPS='"epprd_rg' '"'
+epprd_rg:process_resources[1] JOB_TYPE=WPAR
+epprd_rg:process_resources[1] ACTION=ACQUIRE
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ WPAR == RELEASE ]]
+epprd_rg:process_resources[3360] [[ WPAR == ONLINE ]]
+epprd_rg:process_resources[3492] process_wpars ACQUIRE
+epprd_rg:process_resources[process_wpars:3265] PS4_FUNC=process_wpars
+epprd_rg:process_resources[process_wpars:3265] typeset PS4_FUNC
+epprd_rg:process_resources[process_wpars:3266] [[ high == high ]]
+epprd_rg:process_resources[process_wpars:3266] set -x
+epprd_rg:process_resources[process_wpars:3267] STAT=0
+epprd_rg:process_resources[process_wpars:3268] action=ACQUIRE
+epprd_rg:process_resources[process_wpars:3268] typeset action
+epprd_rg:process_resources[process_wpars:3272] export GROUPNAME
+epprd_rg:process_resources[process_wpars:3275] clstart_wpar
+epprd_rg:clstart_wpar[180] version=1.12.1.1
+epprd_rg:clstart_wpar[184] [[ rg_move == reconfig_resource_acquire ]]
+epprd_rg:clstart_wpar[184] [[ ACQUIRE_PRIMARY == reconfig_resource_acquire ]]
+epprd_rg:clstart_wpar[193] clodmget '-qname = WPAR_NAME' -f group -n HACMPresource
+epprd_rg:clstart_wpar[193] [[ -z '' ]]
+epprd_rg:clstart_wpar[193] exit 0
+epprd_rg:process_resources[process_wpars:3276] RC=0
+epprd_rg:process_resources[process_wpars:3285] (( 0 != 0 && 0 != 11 ))
+epprd_rg:process_resources[process_wpars:3294] return 0
+epprd_rg:process_resources[3493] RC=0
+epprd_rg:process_resources[3495] [[ ACQUIRE == RELEASE ]]
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T18:03:39.145721 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=SERVICE_LABELS ACTION=ACQUIRE IP_LABELS='"epprd"' RESOURCE_GROUPS='"epprd_rg' '"' COMMUNICATION_LINKS='""'
+epprd_rg:process_resources[1] JOB_TYPE=SERVICE_LABELS
+epprd_rg:process_resources[1] ACTION=ACQUIRE
+epprd_rg:process_resources[1] IP_LABELS=epprd
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[1] COMMUNICATION_LINKS=''
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ SERVICE_LABELS == RELEASE ]]
+epprd_rg:process_resources[3360] [[ SERVICE_LABELS == ONLINE ]]
+epprd_rg:process_resources[3407] [[ ACQUIRE == ACQUIRE ]]
+epprd_rg:process_resources[3409] acquire_service_labels
+epprd_rg:process_resources[acquire_service_labels:3083] PS4_FUNC=acquire_service_labels
+epprd_rg:process_resources[acquire_service_labels:3083] typeset PS4_FUNC
+epprd_rg:process_resources[acquire_service_labels:3084] [[ high == high ]]
+epprd_rg:process_resources[acquire_service_labels:3084] set -x
+epprd_rg:process_resources[acquire_service_labels:3085] STAT=0
+epprd_rg:process_resources[acquire_service_labels:3086] clcallev acquire_service_addr
Jan 28 2023 18:03:39 EVENT START: acquire_service_addr
|2023-01-28T18:03:39|8561|EVENT START: acquire_service_addr |
+epprd_rg:acquire_service_addr[416] version=1.74.1.5
+epprd_rg:acquire_service_addr[423] [[ SERVICE_LABELS != 0 ]]
+epprd_rg:acquire_service_addr[423] [[ SERVICE_LABELS != GROUP ]]
+epprd_rg:acquire_service_addr[424] PROC_RES=true
+epprd_rg:acquire_service_addr[440] saveNSORDER=UNDEFINED
+epprd_rg:acquire_service_addr[441] NSORDER=local
+epprd_rg:acquire_service_addr[442] export NSORDER
+epprd_rg:acquire_service_addr[445] cl_RMupdate resource_acquiring All_service_addrs acquire_service_addr
2023-01-28T18:03:39.227529
2023-01-28T18:03:39.231726
+epprd_rg:acquire_service_addr[452] export GROUPNAME
+epprd_rg:acquire_service_addr[458] [[ true == true ]]
+epprd_rg:acquire_service_addr[459] get_list_head epprd
+epprd_rg:acquire_service_addr[459] read SERVICELABELS
+epprd_rg:acquire_service_addr[460] get_list_tail epprd
+epprd_rg:acquire_service_addr[460] read IP_LABELS
+epprd_rg:acquire_service_addr[471] clgetif -a epprd
+epprd_rg:acquire_service_addr[471] 2> /dev/null
+epprd_rg:acquire_service_addr[472] (( 3 != 0 ))
+epprd_rg:acquire_service_addr[477] cllsif -J '~' -Sn epprd
+epprd_rg:acquire_service_addr[477] cut -d~ -f3
+epprd_rg:acquire_service_addr[477] uniq
+epprd_rg:acquire_service_addr[477] NETWORK=net_ether_01
+epprd_rg:acquire_service_addr[478] cllsif -J '~' -Si epprda
+epprd_rg:acquire_service_addr[478] awk -F~ -v NET=net_ether_01 '{if ($2 == "boot" && $3 == NET) print $1}'
+epprd_rg:acquire_service_addr[478] sort
+epprd_rg:acquire_service_addr[478] boot_list=epprda
+epprd_rg:acquire_service_addr[480] [[ -z epprda ]]
+epprd_rg:acquire_service_addr[492] best_boot_addr net_ether_01 epprda
+epprd_rg:acquire_service_addr[best_boot_addr:106] NETWORK=net_ether_01
+epprd_rg:acquire_service_addr[best_boot_addr:106] typeset NETWORK
+epprd_rg:acquire_service_addr[best_boot_addr:107] shift
+epprd_rg:acquire_service_addr[best_boot_addr:108] candidate_boots=epprda
+epprd_rg:acquire_service_addr[best_boot_addr:108] typeset candidate_boots
+epprd_rg:acquire_service_addr[best_boot_addr:112] echo epprda
+epprd_rg:acquire_service_addr[best_boot_addr:112] wc -l
+epprd_rg:acquire_service_addr[best_boot_addr:112] tr ' ' '\n'
+epprd_rg:acquire_service_addr[best_boot_addr:112] num_candidates=' 1'
+epprd_rg:acquire_service_addr[best_boot_addr:112] typeset -li num_candidates
+epprd_rg:acquire_service_addr[best_boot_addr:113] (( 1 == 1 ))
+epprd_rg:acquire_service_addr[best_boot_addr:114] echo epprda
+epprd_rg:acquire_service_addr[best_boot_addr:115] return
+epprd_rg:acquire_service_addr[492] boot_addr=epprda
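best_boot_addr has only one candidate here, so the choice is immediate: count the candidates and echo the single entry. A sketch of the traced path (ranking among multiple candidates is not exercised in this log):

    best_boot_addr() {
        typeset NETWORK=$1; shift
        typeset candidates="$*"
        typeset -li n
        n=$(echo $candidates | tr ' ' '\n' | wc -l)
        if (( n == 1 ))
        then
            echo $candidates                 # single boot address wins outright
            return
        fi
        # multiple candidates would be ranked here (not shown in this trace)
    }
    boot_addr=$(best_boot_addr net_ether_01 epprda)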
+epprd_rg:acquire_service_addr[493] (( 0 != 0 ))
+epprd_rg:acquire_service_addr[505] cut -f1
+epprd_rg:acquire_service_addr[505] clgetif -a epprda
+epprd_rg:acquire_service_addr[505] 2> /dev/null
+epprd_rg:acquire_service_addr[505] INTERFACE='en0 '
+epprd_rg:acquire_service_addr[507] cllsif -J '~' -Sn epprda
+epprd_rg:acquire_service_addr[507] cut -f7,9 -d~
+epprd_rg:acquire_service_addr[508] read boot_dot_addr INTERFACE
+epprd_rg:acquire_service_addr[508] IFS='~'
+epprd_rg:acquire_service_addr[510] [[ -z en0 ]]
+epprd_rg:acquire_service_addr[527] cllsif -J '~' -Sn epprd
+epprd_rg:acquire_service_addr[527] cut -f7,11,15 -d~
+epprd_rg:acquire_service_addr[527] uniq
+epprd_rg:acquire_service_addr[528] read service_dot_addr NETMASK INET_FAMILY
+epprd_rg:acquire_service_addr[528] IFS='~'
+epprd_rg:acquire_service_addr[530] [[ AF_INET == AF_INET6 ]]
+epprd_rg:acquire_service_addr[534] cl_swap_IP_address rotating acquire en0 61.81.244.156 61.81.244.134 255.255.255.0
+epprd_rg:cl_swap_IP_address[462] version=1.9.14.8
+epprd_rg:cl_swap_IP_address[464] cl_get_path -S
+epprd_rg:cl_swap_IP_address[464] OP_SEP='~'
+epprd_rg:cl_swap_IP_address[465] LC_ALL=C
+epprd_rg:cl_swap_IP_address[465] export LC_ALL
+epprd_rg:cl_swap_IP_address[466] RESTORE_ROUTES=/usr/es/sbin/cluster/.restore_routes
+epprd_rg:cl_swap_IP_address[468] cl_echo 33 'Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating acquire en0 61.81.244.156 61.81.244.134 255.255.255.0' /usr/es/sbin/cluster/events/utils/cl_swap_IP_address 'rotating acquire en0 61.81.244.156 61.81.244.134 255.255.255.0'
Jan 28 2023 18:03:39 Starting execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating acquire en0 61.81.244.156 61.81.244.134 255.255.255.0
+epprd_rg:cl_swap_IP_address[470] typeset -i oslevel
+epprd_rg:cl_swap_IP_address[471] /usr/bin/sed s/-//g
+epprd_rg:cl_swap_IP_address[471] /usr/bin/oslevel -r
+epprd_rg:cl_swap_IP_address[471] oslevel=720005
+epprd_rg:cl_swap_IP_address[476] [[ 6 == 6 ]]
+epprd_rg:cl_swap_IP_address[477] [[ 6 == 7 ]]
+epprd_rg:cl_swap_IP_address[484] no -a
+epprd_rg:cl_swap_IP_address[484] awk '{ print $3 }'
+epprd_rg:cl_swap_IP_address[484] grep ipignoreredirects
+epprd_rg:cl_swap_IP_address[484] PRIOR_IPIGNORE_REDIRECTS_VALUE=0
+epprd_rg:cl_swap_IP_address[485] /usr/sbin/no -o ipignoreredirects=1
Setting ipignoreredirects to 1
+epprd_rg:cl_swap_IP_address[490] PROC_RES=false
+epprd_rg:cl_swap_IP_address[491] [[ SERVICE_LABELS != 0 ]]
+epprd_rg:cl_swap_IP_address[491] [[ SERVICE_LABELS != GROUP ]]
+epprd_rg:cl_swap_IP_address[492] PROC_RES=true
+epprd_rg:cl_swap_IP_address[495] set -u
+epprd_rg:cl_swap_IP_address[497] RC=0
+epprd_rg:cl_swap_IP_address[504] netstat -in
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
en0 1500 link#2 fa.e6.13.4e.a9.20 183743545 0 60757861 0 0
en0 1500 61.81.244 61.81.244.134 183743545 0 60757861 0 0
lo0 16896 link#1 34271505 0 34271505 0 0
lo0 16896 127 127.0.0.1 34271505 0 34271505 0 0
lo0 16896 ::1%1 34271505 0 34271505 0 0
+epprd_rg:cl_swap_IP_address[505] netstat -rnC
Routing tables
Destination Gateway Flags Wt Policy If Cost Config_Cost
Route tree for Protocol Family 2 (Internet):
default 61.81.244.1 UG 1 - en0 0 0
61.81.244.0 61.81.244.134 UHSb 1 - en0 0 0 =>
61.81.244/24 61.81.244.134 U 1 - en0 0 0
61.81.244.134 127.0.0.1 UGHS 1 - lo0 0 0
61.81.244.255 61.81.244.134 UHSb 1 - en0 0 0
127/8 127.0.0.1 U 1 - lo0 0 0
Route tree for Protocol Family 24 (Internet v6):
::1%1 ::1%1 UH 1 - lo0 0 0
+epprd_rg:cl_swap_IP_address[506] CASC_OR_ROT=rotating
+epprd_rg:cl_swap_IP_address[507] ACQ_OR_RLSE=acquire
+epprd_rg:cl_swap_IP_address[508] IF=en0
+epprd_rg:cl_swap_IP_address[509] ADDR=61.81.244.156
+epprd_rg:cl_swap_IP_address[510] OLD_ADDR=61.81.244.134
+epprd_rg:cl_swap_IP_address[511] NETMASK=255.255.255.0
+epprd_rg:cl_swap_IP_address[514] [[ rotating == cascading ]]
+epprd_rg:cl_swap_IP_address[525] cut -f3 -d~
+epprd_rg:cl_swap_IP_address[525] cllsif -J '~' -Sw -n 61.81.244.156
+epprd_rg:cl_swap_IP_address[525] NET=net_ether_01
+epprd_rg:cl_swap_IP_address[528] clodmget -qidentifier=61.81.244.156 -f max_aliases -n HACMPadapter
+epprd_rg:cl_swap_IP_address[528] ALIAS_FIRST=0
+epprd_rg:cl_swap_IP_address[529] grep -c -w inet
+epprd_rg:cl_swap_IP_address[529] ifconfig en0
+epprd_rg:cl_swap_IP_address[529] LC_ALL=C
+epprd_rg:cl_swap_IP_address[529] NUM_ADDRS=1
+epprd_rg:cl_swap_IP_address[530] [[ acquire == acquire ]]
+epprd_rg:cl_swap_IP_address[533] amlog_trace '' 'Aliasing Service IP|61.81.244.156'
+epprd_rg:cl_swap_IP_address[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_swap_IP_address[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_swap_IP_address[amlog_trace:319] cltime
+epprd_rg:cl_swap_IP_address[amlog_trace:319] DATE=2023-01-28T18:03:39.464817
+epprd_rg:cl_swap_IP_address[amlog_trace:320] echo '|2023-01-28T18:03:39.464817|INFO: Aliasing Service IP|61.81.244.156'
+epprd_rg:cl_swap_IP_address[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_swap_IP_address[535] cl_echo 7310 'cl_swap_IP_address: Configuring network interface en0 with aliased IP address 61.81.244.156' cl_swap_IP_address en0 61.81.244.156
Jan 28 2023 18:03:39 cl_swap_IP_address: Configuring network interface en0 with aliased IP address 61.81.244.156
+epprd_rg:cl_swap_IP_address[546] (( 1 > 1 ))
+epprd_rg:cl_swap_IP_address[550] clifconfig en0 alias 61.81.244.156 netmask 255.255.255.0 firstalias
+epprd_rg:clifconfig[117] version=1.9
+epprd_rg:clifconfig[121] set -A args en0 alias 61.81.244.156 netmask 255.255.255.0 firstalias
+epprd_rg:clifconfig[124] interface=en0
+epprd_rg:clifconfig[125] shift
+epprd_rg:clifconfig[127] [[ -n alias ]]
+epprd_rg:clifconfig[129] alias_val=1
+epprd_rg:clifconfig[167] shift
+epprd_rg:clifconfig[127] [[ -n 61.81.244.156 ]]
+epprd_rg:clifconfig[147] params=' address=61.81.244.156'
+epprd_rg:clifconfig[147] addr=61.81.244.156
+epprd_rg:clifconfig[167] shift
+epprd_rg:clifconfig[127] [[ -n netmask ]]
+epprd_rg:clifconfig[149] params=' address=61.81.244.156 netmask=255.255.255.0'
+epprd_rg:clifconfig[149] shift
+epprd_rg:clifconfig[167] shift
+epprd_rg:clifconfig[127] [[ -n firstalias ]]
+epprd_rg:clifconfig[167] shift
+epprd_rg:clifconfig[127] [[ -n '' ]]
+epprd_rg:clifconfig[174] [[ -n 1 ]]
+epprd_rg:clifconfig[174] [[ -n epprd_rg ]]
+epprd_rg:clifconfig[175] clwparname epprd_rg
+epprd_rg:clwparname[38] version=1.3.1.1
+epprd_rg:clwparname[44] clodmget '-qname = WPAR_NAME' -f group -n HACMPresource
+epprd_rg:clwparname[44] [[ -z '' ]]
+epprd_rg:clwparname[44] exit 0
+epprd_rg:clifconfig[175] WPARNAME=''
+epprd_rg:clifconfig[176] (( 0 == 0 ))
+epprd_rg:clifconfig[176] [[ -n '' ]]
+epprd_rg:clifconfig[218] belongs_to_an_active_wpar 61.81.244.156
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] clodmget '-qname = WPAR_NAME' -f group -n HACMPresource
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] [[ -z '' ]]
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] return 1
+epprd_rg:clifconfig[218] read wpar_name wpar_if wpar_netmask wpar_broadcast
+epprd_rg:clifconfig[218] IFS='~'
+epprd_rg:clifconfig[219] rc=1
+epprd_rg:clifconfig[221] [[ 1 == 0 ]]
+epprd_rg:clifconfig[275] ifconfig en0 alias 61.81.244.156 netmask 255.255.255.0 firstalias
+epprd_rg:cl_swap_IP_address[584] hats_adapter_notify en0 -e 61.81.244.156 alias
2023-01-28T18:03:39.516659 hats_adapter_notify
2023-01-28T18:03:39.517619 hats_adapter_notify
+epprd_rg:cl_swap_IP_address[587] check_alias_status en0 61.81.244.156 acquire
+epprd_rg:cl_swap_IP_address[check_alias_status:108] CH_INTERFACE=en0
+epprd_rg:cl_swap_IP_address[check_alias_status:109] CH_ADDRESS=61.81.244.156
+epprd_rg:cl_swap_IP_address[check_alias_status:110] CH_ACQ_OR_RLSE=acquire
+epprd_rg:cl_swap_IP_address[check_alias_status:118] IF_IB=en0
+epprd_rg:cl_swap_IP_address[check_alias_status:120] awk '{print index($0, "ib")}'
+epprd_rg:cl_swap_IP_address[check_alias_status:120] echo en0
+epprd_rg:cl_swap_IP_address[check_alias_status:120] IS_IB=0
+epprd_rg:cl_swap_IP_address[check_alias_status:122] [[ 0 != 1 ]]
+epprd_rg:cl_swap_IP_address[check_alias_status:124] clifconfig en0
+epprd_rg:cl_swap_IP_address[check_alias_status:124] fgrep -w 61.81.244.156
+epprd_rg:cl_swap_IP_address[check_alias_status:124] awk '{print $2}'
+epprd_rg:clifconfig[117] version=1.9
+epprd_rg:clifconfig[121] set -A args en0
+epprd_rg:clifconfig[124] interface=en0
+epprd_rg:clifconfig[125] shift
+epprd_rg:clifconfig[127] [[ -n '' ]]
+epprd_rg:clifconfig[174] [[ -n '' ]]
+epprd_rg:clifconfig[218] belongs_to_an_active_wpar
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] clodmget '-qname = WPAR_NAME' -f group -n HACMPresource
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] [[ -z '' ]]
+epprd_rg:clifconfig[belongs_to_an_active_wpar:63] return 1
+epprd_rg:clifconfig[218] read wpar_name wpar_if wpar_netmask wpar_broadcast
+epprd_rg:clifconfig[218] IFS='~'
+epprd_rg:clifconfig[219] rc=1
+epprd_rg:clifconfig[221] [[ 1 == 0 ]]
+epprd_rg:clifconfig[275] ifconfig en0
+epprd_rg:cl_swap_IP_address[check_alias_status:124] ADDR=61.81.244.156
+epprd_rg:cl_swap_IP_address[check_alias_status:129] [ acquire = acquire ]
+epprd_rg:cl_swap_IP_address[check_alias_status:133] [[ 61.81.244.156 != 61.81.244.156 ]]
+epprd_rg:cl_swap_IP_address[check_alias_status:144] return 0
+epprd_rg:cl_swap_IP_address[588] RC=0
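check_alias_status confirms the alias actually landed: pull the interface's address list through clifconfig, grep for the service address, and on acquire fail if it is absent. A condensed sketch using the values from this trace:

    ADDR=$(clifconfig en0 | fgrep -w 61.81.244.156 | awk '{print $2}')
    if [[ $ADDR != 61.81.244.156 ]]
    then
        exit 1                               # acquire: the address must now be on en0
    fi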
+epprd_rg:cl_swap_IP_address[590] [[ 0 != 0 ]]
+epprd_rg:cl_swap_IP_address[594] amlog_trace '' 'Aliasing Service IP|61.81.244.156'
+epprd_rg:cl_swap_IP_address[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_swap_IP_address[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_swap_IP_address[amlog_trace:319] cltime
+epprd_rg:cl_swap_IP_address[amlog_trace:319] DATE=2023-01-28T18:03:39.571690
+epprd_rg:cl_swap_IP_address[amlog_trace:320] echo '|2023-01-28T18:03:39.571690|INFO: Aliasing Service IP|61.81.244.156'
+epprd_rg:cl_swap_IP_address[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_swap_IP_address[701] [[ 0 != 0 ]]
+epprd_rg:cl_swap_IP_address[714] flush_arp
+epprd_rg:cl_swap_IP_address[flush_arp:49] arp -an
+epprd_rg:cl_swap_IP_address[flush_arp:49] grep '\?'
+epprd_rg:cl_swap_IP_address[flush_arp:49] tr -d '()'
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.27
61.81.244.27 (61.81.244.27) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.220
61.81.244.220 (61.81.244.220) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.221
61.81.244.221 (61.81.244.221) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.224
61.81.244.224 (61.81.244.224) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.239
61.81.244.239 (61.81.244.239) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.123
61.81.244.123 (61.81.244.123) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.146
61.81.244.146 (61.81.244.146) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.1
61.81.244.1 (61.81.244.1) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:50] arp -d 61.81.244.156
61.81.244.156 (61.81.244.156) deleted
+epprd_rg:cl_swap_IP_address[flush_arp:49] read host addr other
+epprd_rg:cl_swap_IP_address[flush_arp:52] return 0
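flush_arp deletes every completed ARP entry so neighbors re-resolve and learn which MAC now answers for the moved service address: arp -an output is parsed with the parenthesized addresses stripped, then each entry is deleted. The loop as a sketch:

    arp -an | grep '\?' | tr -d '()' | while read host addr other
    do
        arp -d $addr                         # force peers to re-ARP this address
    done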
+epprd_rg:cl_swap_IP_address[716] netstat -in
Name Mtu Network Address Ipkts Ierrs Opkts Oerrs Coll
en0 1500 link#2 fa.e6.13.4e.a9.20 183743638 0 60757986 0 0
en0 1500 61.81.244 61.81.244.156 183743638 0 60757986 0 0
en0 1500 61.81.244 61.81.244.134 183743638 0 60757986 0 0
lo0 16896 link#1 34271517 0 34271517 0 0
lo0 16896 127 127.0.0.1 34271517 0 34271517 0 0
lo0 16896 ::1%1 34271517 0 34271517 0 0
+epprd_rg:cl_swap_IP_address[717] netstat -rnC
Routing tables
Destination Gateway Flags Wt Policy If Cost Config_Cost
Route tree for Protocol Family 2 (Internet):
default 61.81.244.1 UG 1 - en0 0 0
61.81.244.0 61.81.244.156 UHSb 1 - en0 0 0 =>
61.81.244/24 61.81.244.156 U 1 - en0 0 0
61.81.244.134 127.0.0.1 UGHS 1 - lo0 0 0
61.81.244.156 127.0.0.1 UGHS 1 - lo0 0 0
61.81.244.255 61.81.244.156 UHSb 1 - en0 0 0
127/8 127.0.0.1 U 1 - lo0 0 0
Route tree for Protocol Family 24 (Internet v6):
::1%1 ::1%1 UH 1 - lo0 0 0
+epprd_rg:cl_swap_IP_address[989] no -o ipignoreredirects=0
Setting ipignoreredirects to 0
+epprd_rg:cl_swap_IP_address[992] cl_echo 32 'Completed execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating acquire en0 61.81.244.156 61.81.244.134 255.255.255.0. Exit status = 0' /usr/es/sbin/cluster/events/utils/cl_swap_IP_address 'rotating acquire en0 61.81.244.156 61.81.244.134 255.255.255.0' 0
Jan 28 2023 18:03:39 Completed execution of /usr/es/sbin/cluster/events/utils/cl_swap_IP_address with parameters rotating acquire en0 61.81.244.156 61.81.244.134 255.255.255.0. Exit status = 0
+epprd_rg:cl_swap_IP_address[994] date
Sat Jan 28 18:03:39 KORST 2023
+epprd_rg:cl_swap_IP_address[996] exit 0
+epprd_rg:acquire_service_addr[537] RC=0
+epprd_rg:acquire_service_addr[539] (( 0 != 0 ))
+epprd_rg:acquire_service_addr[549] [[ true == false ]]
+epprd_rg:acquire_service_addr[560] cl_RMupdate resource_up All_nonerror_service_addrs acquire_service_addr
2023-01-28T18:03:39.649697
2023-01-28T18:03:39.654111
+epprd_rg:acquire_service_addr[565] [[ UNDEFINED != UNDEFINED ]]
+epprd_rg:acquire_service_addr[568] NSORDER=''
+epprd_rg:acquire_service_addr[568] export NSORDER
+epprd_rg:acquire_service_addr[571] [[ true == false ]]
+epprd_rg:acquire_service_addr[579] exit 0
Jan 28 2023 18:03:39 EVENT COMPLETED: acquire_service_addr 0
|2023-01-28T18:03:39|8561|EVENT COMPLETED: acquire_service_addr 0|
+epprd_rg:process_resources[acquire_service_labels:3087] RC=0
+epprd_rg:process_resources[acquire_service_labels:3089] (( 0 != 0 && 0 != 11 ))
+epprd_rg:process_resources[acquire_service_labels:3104] (( 0 != 0 ))
+epprd_rg:process_resources[acquire_service_labels:3110] refresh -s clcomd
0513-095 The request for subsystem refresh was completed successfully.
+epprd_rg:process_resources[acquire_service_labels:3112] return 0
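With the service label up, clcomd is refreshed through the SRC so the cluster communications daemon takes note of the newly acquired address; 0513-095 is the standard SRC success message:

    refresh -s clcomd    # ask SRC to refresh the clcomd subsystem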
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T18:03:39.729540 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=DISKS ACTION=ACQUIRE HDISKS='"hdisk2,hdisk3,hdisk4,hdisk5,hdisk6,hdisk7,hdisk8"' RESOURCE_GROUPS='"epprd_rg' '"' VOLUME_GROUPS='"datavg,datavg,datavg,datavg,datavg,datavg,datavg"'
+epprd_rg:process_resources[1] JOB_TYPE=DISKS
+epprd_rg:process_resources[1] ACTION=ACQUIRE
+epprd_rg:process_resources[1] HDISKS=hdisk2,hdisk3,hdisk4,hdisk5,hdisk6,hdisk7,hdisk8
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[1] VOLUME_GROUPS=datavg,datavg,datavg,datavg,datavg,datavg,datavg
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
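Note the dispatch pattern: clRGPA prints shell assignments, and process_resources evals them between set -a and set +a so that every job variable (JOB_TYPE, ACTION, HDISKS, VOLUME_GROUPS, ...) is automatically exported to the utility scripts called next. A minimal sketch, where get_next_job is a hypothetical stand-in for clRGPA:

    set -a                    # auto-export all assignments below
    eval $(get_next_job)      # e.g. JOB_TYPE=DISKS ACTION=ACQUIRE ...
    set +a
    case $JOB_TYPE in
        DISKS) echo "make disks available: $HDISKS" ;;
        VGS)   echo "vary on volume groups: $VOLUME_GROUPS" ;;
    esac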
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ DISKS == RELEASE ]]
+epprd_rg:process_resources[3360] [[ DISKS == ONLINE ]]
+epprd_rg:process_resources[3439] [[ ACQUIRE == ACQUIRE ]]
+epprd_rg:process_resources[3441] FAILED_RR_RGS=''
+epprd_rg:process_resources[3442] get_disks_main
+epprd_rg:process_resources[get_disks_main:981] PS4_FUNC=get_disks_main
+epprd_rg:process_resources[get_disks_main:981] typeset PS4_FUNC
+epprd_rg:process_resources[get_disks_main:982] [[ high == high ]]
+epprd_rg:process_resources[get_disks_main:982] set -x
+epprd_rg:process_resources[get_disks_main:983] SKIPBRKRES=0
+epprd_rg:process_resources[get_disks_main:983] typeset -li SKIPBRKRES
+epprd_rg:process_resources[get_disks_main:984] STAT=0
+epprd_rg:process_resources[get_disks_main:985] FAILURE_IN_METHOD=0
+epprd_rg:process_resources[get_disks_main:985] typeset -li FAILURE_IN_METHOD
+epprd_rg:process_resources[get_disks_main:986] LIST_OF_FAILED_RGS=''
+epprd_rg:process_resources[get_disks_main:989] : Below is the list of resources as generated by clrgpa
+epprd_rg:process_resources[get_disks_main:991] RG_LIST=epprd_rg
+epprd_rg:process_resources[get_disks_main:992] RDISK_LIST=''
+epprd_rg:process_resources[get_disks_main:993] DISK_LIST=hdisk2,hdisk3,hdisk4,hdisk5,hdisk6,hdisk7,hdisk8
+epprd_rg:process_resources[get_disks_main:994] VG_LIST=datavg,datavg,datavg,datavg,datavg,datavg,datavg
+epprd_rg:process_resources[get_disks_main:997] : Resource groups are processed individually. This is required because
+epprd_rg:process_resources[get_disks_main:998] : the replication mechanism may differ between resource groups.
+epprd_rg:process_resources[get_disks_main:1002] getReplicatedResources epprd_rg
+epprd_rg:process_resources[getReplicatedResources:699] PS4_FUNC=getReplicatedResources
+epprd_rg:process_resources[getReplicatedResources:699] typeset PS4_FUNC
+epprd_rg:process_resources[getReplicatedResources:700] [[ high == high ]]
+epprd_rg:process_resources[getReplicatedResources:700] set -x
+epprd_rg:process_resources[getReplicatedResources:702] RV=false
+epprd_rg:process_resources[getReplicatedResources:704] clodmget -n -f type HACMPrresmethods
+epprd_rg:process_resources[getReplicatedResources:704] [[ -n 9 ]]
+epprd_rg:process_resources[getReplicatedResources:707] : Replicated resource methods are defined, check for resources
+epprd_rg:process_resources[getReplicatedResources:709] clodmget -q $'name like \'*_REP_RESOURCE\' AND group=epprd_rg' -f value -n HACMPresource
+epprd_rg:process_resources[getReplicatedResources:709] [[ -n '' ]]
+epprd_rg:process_resources[getReplicatedResources:718] : Verify if any backup profiles are configured and trigger cbm utilities based on that
+epprd_rg:process_resources[getReplicatedResources:720] clodmget -q name=BACKUP_ENABLED -f value HACMPresource
+epprd_rg:process_resources[getReplicatedResources:720] [[ -n '' ]]
+epprd_rg:process_resources[getReplicatedResources:739] echo false
+epprd_rg:process_resources[get_disks_main:1002] REPLICATED_RESOURCES=false
+epprd_rg:process_resources[get_disks_main:1005] : Break out the resources for resource group epprd_rg
+epprd_rg:process_resources[get_disks_main:1007] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[get_disks_main:1008] VOLUME_GROUPS=''
+epprd_rg:process_resources[get_disks_main:1009] HDISKS=''
+epprd_rg:process_resources[get_disks_main:1010] RHDISKS=''
+epprd_rg:process_resources[get_disks_main:1011] RDISK_LIST=''
+epprd_rg:process_resources[get_disks_main:1014] : Get the volume groups in resource group epprd_rg
+epprd_rg:process_resources[get_disks_main:1016] print datavg,datavg,datavg,datavg,datavg,datavg,datavg
+epprd_rg:process_resources[get_disks_main:1016] read VOLUME_GROUPS VG_LIST
+epprd_rg:process_resources[get_disks_main:1016] IFS=:
+epprd_rg:process_resources[get_disks_main:1018] : Removing duplicate entries in VG list.
+epprd_rg:process_resources[get_disks_main:1020] echo datavg,datavg,datavg,datavg,datavg,datavg,datavg
+epprd_rg:process_resources[get_disks_main:1020] tr , '\n'
+epprd_rg:process_resources[get_disks_main:1020] xargs
+epprd_rg:process_resources[get_disks_main:1020] sort -u
+epprd_rg:process_resources[get_disks_main:1020] VOLUME_GROUPS=datavg
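clrgpa reports one VG name per hdisk, so datavg appears seven times; the pipeline above collapses the list to unique names. The same idiom works for any comma-separated list:

    # comma list -> one per line -> unique -> rejoined with spaces
    echo datavg,datavg,datavg | tr , '\n' | sort -u | xargs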
+epprd_rg:process_resources[get_disks_main:1022] : Get the disks corresponding to these volume groups
+epprd_rg:process_resources[get_disks_main:1024] print hdisk2,hdisk3,hdisk4,hdisk5,hdisk6,hdisk7,hdisk8
+epprd_rg:process_resources[get_disks_main:1024] read HDISKS DISK_LIST
+epprd_rg:process_resources[get_disks_main:1024] IFS=:
+epprd_rg:process_resources[get_disks_main:1025] HDISKS='hdisk2 hdisk3 hdisk4 hdisk5 hdisk6 hdisk7 hdisk8'
+epprd_rg:process_resources[get_disks_main:1031] : Pick up any raw disks not returned by clrgpa
+epprd_rg:process_resources[get_disks_main:1033] clodmget -q group='epprd_rg AND name=RAW_DISK' HACMPresource
+epprd_rg:process_resources[get_disks_main:1033] [[ -n '' ]]
+epprd_rg:process_resources[get_disks_main:1042] : Get any raw disks in resource group epprd_rg
+epprd_rg:process_resources[get_disks_main:1045] print
+epprd_rg:process_resources[get_disks_main:1045] read RHDISKS RDISK_LIST
+epprd_rg:process_resources[get_disks_main:1045] IFS=:
+epprd_rg:process_resources[get_disks_main:1046] RHDISKS=''
+epprd_rg:process_resources[get_disks_main:1047] print datavg
+epprd_rg:process_resources[get_disks_main:1047] read VOLUME_GROUPS
+epprd_rg:process_resources[get_disks_main:1051] : At this point, the global variables below should be set to
+epprd_rg:process_resources[get_disks_main:1052] : the values associated with resource group epprd_rg
+epprd_rg:process_resources[get_disks_main:1054] export RESOURCE_GROUPS
+epprd_rg:process_resources[get_disks_main:1055] export VOLUME_GROUPS
+epprd_rg:process_resources[get_disks_main:1056] export HDISKS
+epprd_rg:process_resources[get_disks_main:1057] export RHDISKS
+epprd_rg:process_resources[get_disks_main:1059] [[ false == true ]]
+epprd_rg:process_resources[get_disks_main:1182] get_disks
+epprd_rg:process_resources[get_disks:1198] PS4_FUNC=get_disks
+epprd_rg:process_resources[get_disks:1198] typeset PS4_FUNC
+epprd_rg:process_resources[get_disks:1199] [[ high == high ]]
+epprd_rg:process_resources[get_disks:1199] set -x
+epprd_rg:process_resources[get_disks:1201] STAT=0
+epprd_rg:process_resources[get_disks:1204] : Most volume groups are Enhanced Concurrent Mode, and it should
+epprd_rg:process_resources[get_disks:1205] : not be necessary to break reserves. If all the volume groups
+epprd_rg:process_resources[get_disks:1206] : are ECM, we should be able to skip breaking reserves. If it
+epprd_rg:process_resources[get_disks:1207] : turns out that there is a reserve on a disk in an ECM volume
+epprd_rg:process_resources[get_disks:1208] : group, that will be handled by cl_pvo making an explicit call
+epprd_rg:process_resources[get_disks:1209] : to cl_disk_available.
+epprd_rg:process_resources[get_disks:1213] all_ecm=TRUE
+epprd_rg:process_resources[get_disks:1214] IFS=:
+epprd_rg:process_resources[get_disks:1214] set -- datavg
+epprd_rg:process_resources[get_disks:1214] print datavg
+epprd_rg:process_resources[get_disks:1216] print datavg
+epprd_rg:process_resources[get_disks:1216] sort -u
+epprd_rg:process_resources[get_disks:1216] tr , '\n'
+epprd_rg:process_resources[get_disks:1218] clodmget -q 'name = datavg and attribute = conc_capable' -f value -n CuAt
+epprd_rg:process_resources[get_disks:1218] [[ y != y ]]
+epprd_rg:process_resources[get_disks:1224] [[ TRUE == FALSE ]]
+epprd_rg:process_resources[get_disks:1226] [[ TRUE == TRUE ]]
+epprd_rg:process_resources[get_disks:1226] return 0
+epprd_rg:process_resources[get_disks_main:1183] STAT=0
+epprd_rg:process_resources[get_disks_main:1186] return 0
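get_disks decided it can skip breaking reserves because every volume group is Enhanced Concurrent Mode: the conc_capable attribute in the CuAt ODM class is y for datavg. The equivalent standalone query, using the clodmget wrapper seen throughout this log:

    # Prints y when the volume group is ECM-capable.
    clodmget -q 'name = datavg and attribute = conc_capable' -f value -n CuAt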
+epprd_rg:process_resources[3443] tr ' ' '\n'
+epprd_rg:process_resources[3443] echo
+epprd_rg:process_resources[3443] FAILED_RR_RGS=''
+epprd_rg:process_resources[3444] [[ -n '' ]]
+epprd_rg:process_resources[3450] clodmget -n -q policy=scsi -f value HACMPsplitmerge
+epprd_rg:process_resources[3450] SCSIPR_ENABLED=''
+epprd_rg:process_resources[3450] typeset SCSIPR_ENABLED
+epprd_rg:process_resources[3451] [[ '' == Yes ]]
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T18:03:39.805587 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=VGS ACTION=ACQUIRE CONCURRENT_VOLUME_GROUP='""' VOLUME_GROUPS='"datavg"' RESOURCE_GROUPS='"epprd_rg' '"' EXPORT_FILESYSTEM='""'
+epprd_rg:process_resources[1] JOB_TYPE=VGS
+epprd_rg:process_resources[1] ACTION=ACQUIRE
+epprd_rg:process_resources[1] CONCURRENT_VOLUME_GROUP=''
+epprd_rg:process_resources[1] VOLUME_GROUPS=datavg
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[1] EXPORT_FILESYSTEM=''
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ VGS == RELEASE ]]
+epprd_rg:process_resources[3360] [[ VGS == ONLINE ]]
+epprd_rg:process_resources[3571] process_volume_groups_main ACQUIRE
+epprd_rg:process_resources[process_volume_groups_main:2293] PS4_FUNC=process_volume_groups_main
+epprd_rg:process_resources[process_volume_groups_main:2293] typeset PS4_FUNC
+epprd_rg:process_resources[process_volume_groups_main:2294] [[ high == high ]]
+epprd_rg:process_resources[process_volume_groups_main:2294] set -x
+epprd_rg:process_resources[process_volume_groups_main:2295] DEF_VARYON_ACTION=0
+epprd_rg:process_resources[process_volume_groups_main:2295] typeset -li DEF_VARYON_ACTION
+epprd_rg:process_resources[process_volume_groups_main:2296] FAILURE_IN_METHOD=0
+epprd_rg:process_resources[process_volume_groups_main:2296] typeset -li FAILURE_IN_METHOD
+epprd_rg:process_resources[process_volume_groups_main:2297] ACTION=ACQUIRE
+epprd_rg:process_resources[process_volume_groups_main:2297] typeset ACTION
+epprd_rg:process_resources[process_volume_groups_main:2298] STAT=0
+epprd_rg:process_resources[process_volume_groups_main:2299] VG_LIST=datavg
+epprd_rg:process_resources[process_volume_groups_main:2300] RG_LIST=epprd_rg
+epprd_rg:process_resources[process_volume_groups_main:2304] getReplicatedResources epprd_rg
+epprd_rg:process_resources[getReplicatedResources:699] PS4_FUNC=getReplicatedResources
+epprd_rg:process_resources[getReplicatedResources:699] typeset PS4_FUNC
+epprd_rg:process_resources[getReplicatedResources:700] [[ high == high ]]
+epprd_rg:process_resources[getReplicatedResources:700] set -x
+epprd_rg:process_resources[getReplicatedResources:702] RV=false
+epprd_rg:process_resources[getReplicatedResources:704] clodmget -n -f type HACMPrresmethods
+epprd_rg:process_resources[getReplicatedResources:704] [[ -n 9 ]]
+epprd_rg:process_resources[getReplicatedResources:707] : Replicated resource methods are defined, check for resources
+epprd_rg:process_resources[getReplicatedResources:709] clodmget -q $'name like \'*_REP_RESOURCE\' AND group=epprd_rg' -f value -n HACMPresource
+epprd_rg:process_resources[getReplicatedResources:709] [[ -n '' ]]
+epprd_rg:process_resources[getReplicatedResources:718] : Verify if any backup profiles are configured and trigger cbm utilities based on that
+epprd_rg:process_resources[getReplicatedResources:720] clodmget -q name=BACKUP_ENABLED -f value HACMPresource
+epprd_rg:process_resources[getReplicatedResources:720] [[ -n '' ]]
+epprd_rg:process_resources[getReplicatedResources:739] echo false
+epprd_rg:process_resources[process_volume_groups_main:2304] REPLICATED_RESOURCES=false
+epprd_rg:process_resources[process_volume_groups_main:2305] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[process_volume_groups_main:2306] print -- datavg
+epprd_rg:process_resources[process_volume_groups_main:2306] read VOLUME_GROUPS VG_LIST
+epprd_rg:process_resources[process_volume_groups_main:2306] IFS=:
+epprd_rg:process_resources[process_volume_groups_main:2307] VOLUME_GROUPS=datavg
+epprd_rg:process_resources[process_volume_groups_main:2310] : At this point, these variables contain information only for epprd_rg
+epprd_rg:process_resources[process_volume_groups_main:2312] export VOLUME_GROUPS
+epprd_rg:process_resources[process_volume_groups_main:2313] export RESOURCE_GROUPS
+epprd_rg:process_resources[process_volume_groups_main:2315] [[ false == true ]]
+epprd_rg:process_resources[process_volume_groups_main:2555] process_volume_groups ACQUIRE
+epprd_rg:process_resources[process_volume_groups:2571] PS4_FUNC=process_volume_groups
+epprd_rg:process_resources[process_volume_groups:2571] typeset PS4_FUNC
+epprd_rg:process_resources[process_volume_groups:2572] [[ high == high ]]
+epprd_rg:process_resources[process_volume_groups:2572] set -x
+epprd_rg:process_resources[process_volume_groups:2573] STAT=0
+epprd_rg:process_resources[process_volume_groups:2575] GROUPNAME=epprd_rg
+epprd_rg:process_resources[process_volume_groups:2575] export GROUPNAME
+epprd_rg:process_resources[process_volume_groups:2578] [[ ACQUIRE == ACQUIRE ]]
+epprd_rg:process_resources[process_volume_groups:2581] : Varyon the VGs in the environment
+epprd_rg:process_resources[process_volume_groups:2583] cl_activate_vgs -n
+epprd_rg:cl_activate_vgs[213] [[ high == high ]]
+epprd_rg:cl_activate_vgs[213] version=1.46
+epprd_rg:cl_activate_vgs[215] STATUS=0
+epprd_rg:cl_activate_vgs[215] typeset -li STATUS
+epprd_rg:cl_activate_vgs[216] SYNCFLAG=''
+epprd_rg:cl_activate_vgs[217] CLENV=''
+epprd_rg:cl_activate_vgs[218] TMP_FILENAME=/tmp/_activate_vgs.tmp
+epprd_rg:cl_activate_vgs[219] USE_OEM_METHODS=false
+epprd_rg:cl_activate_vgs[221] PROC_RES=false
+epprd_rg:cl_activate_vgs[225] [[ VGS != 0 ]]
+epprd_rg:cl_activate_vgs[225] [[ VGS != GROUP ]]
+epprd_rg:cl_activate_vgs[226] PROC_RES=true
+epprd_rg:cl_activate_vgs[232] [[ -n == -n ]]
+epprd_rg:cl_activate_vgs[234] SYNCFLAG=-n
+epprd_rg:cl_activate_vgs[235] shift
+epprd_rg:cl_activate_vgs[240] (( 0 != 0 ))
+epprd_rg:cl_activate_vgs[247] set -u
+epprd_rg:cl_activate_vgs[250] rm -f /tmp/_activate_vgs.tmp
+epprd_rg:cl_activate_vgs[254] lsvg -L -o
+epprd_rg:cl_activate_vgs[254] print caavg_private rootvg
+epprd_rg:cl_activate_vgs[254] VGSTATUS='caavg_private rootvg'
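Before activating anything, cl_activate_vgs snapshots the groups that are already varied on; datavg is absent from the list, so it will be processed. The -L flag keeps lsvg from blocking on LVM locks held by concurrent operations:

    VGSTATUS=$(lsvg -L -o)    # currently online VGs, without lock waits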
+epprd_rg:cl_activate_vgs[257] ALLVGS=All_volume_groups
+epprd_rg:cl_activate_vgs[258] cl_RMupdate resource_acquiring All_volume_groups cl_activate_vgs
2023-01-28T18:03:39.877790
2023-01-28T18:03:39.882284
+epprd_rg:cl_activate_vgs[262] [[ true == false ]]
+epprd_rg:cl_activate_vgs[285] LIST_OF_VOLUME_GROUPS_FOR_RG=''
+epprd_rg:cl_activate_vgs[289] export GROUPNAME
+epprd_rg:cl_activate_vgs[291] echo datavg
+epprd_rg:cl_activate_vgs[291] read LIST_OF_VOLUME_GROUPS_FOR_RG VOLUME_GROUPS
+epprd_rg:cl_activate_vgs[291] IFS=:
+epprd_rg:cl_activate_vgs[294] echo datavg
+epprd_rg:cl_activate_vgs[296] sort -u
+epprd_rg:cl_activate_vgs[295] tr , '\n'
+epprd_rg:cl_activate_vgs[294] LIST_OF_VOLUME_GROUPS_FOR_RG=datavg
+epprd_rg:cl_activate_vgs[298] vgs_list datavg
+epprd_rg:cl_activate_vgs[vgs_list:178] PS4_LOOP=''
+epprd_rg:cl_activate_vgs[vgs_list:178] typeset PS4_LOOP
+epprd_rg:cl_activate_vgs:datavg[vgs_list:182] PS4_LOOP=datavg
+epprd_rg:cl_activate_vgs:datavg[vgs_list:186] [[ 'caavg_private rootvg' == @(?(*\ )datavg?(\ *)) ]]
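The membership test above uses a ksh extended pattern to ask whether datavg is one of the whitespace-delimited words in VGSTATUS: @(?(*\ )datavg?(\ *)) matches the name alone, or bounded by spaces. Illustrated on its own:

    list='caavg_private rootvg'
    vg=datavg
    # True only when $vg appears as a whole word in $list.
    [[ $list == @(?(*\ )$vg?(\ *)) ]] && echo "$vg already online"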
+epprd_rg:cl_activate_vgs:datavg[vgs_list:192] : call varyon for the volume group in Foreground
+epprd_rg:cl_activate_vgs:datavg[vgs_list:194] vgs_chk datavg -n cl_activate_vgs
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:78] VG=datavg
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:78] typeset VG
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:79] SYNCFLAG=-n
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:79] typeset SYNCFLAG
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:80] PROGNAME=cl_activate_vgs
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:80] typeset PROGNAME
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:81] STATUS=0
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:81] typeset -li STATUS
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:83] [[ -n '' ]]
+epprd_rg:cl_activate_vgs(0.052):datavg[vgs_chk:100] amlog_trace '' 'Activating Volume Group|datavg'
+epprd_rg:cl_activate_vgs(0.052):datavg[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_vgs(0.053):datavg[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_vgs(0.077):datavg[amlog_trace:319] cltime
+epprd_rg:cl_activate_vgs(0.080):datavg[amlog_trace:319] DATE=2023-01-28T18:03:39.919068
+epprd_rg:cl_activate_vgs(0.080):datavg[amlog_trace:320] echo '|2023-01-28T18:03:39.919068|INFO: Activating Volume Group|datavg'
+epprd_rg:cl_activate_vgs(0.080):datavg[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_vgs(0.080):datavg[vgs_chk:102] typeset -x ERRMSG
+epprd_rg:cl_activate_vgs(0.080):datavg[vgs_chk:103] clvaryonvg -n datavg
+epprd_rg:clvaryonvg(0.009):datavg[985] version=1.21.7.22
+epprd_rg:clvaryonvg(0.009):datavg[989] : Without this test, cause of failure due to non-root may not be obvious
+epprd_rg:clvaryonvg(0.009):datavg[991] [[ -z '' ]]
+epprd_rg:clvaryonvg(0.009):datavg[991] id -nu
+epprd_rg:clvaryonvg(0.010):datavg[991] 2> /dev/null
+epprd_rg:clvaryonvg(0.012):datavg[991] user_name=root
+epprd_rg:clvaryonvg(0.012):datavg[994] : Check if RBAC is enabled
+epprd_rg:clvaryonvg(0.012):datavg[996] is_rbac_enabled=''
+epprd_rg:clvaryonvg(0.012):datavg[996] typeset is_rbac_enabled
+epprd_rg:clvaryonvg(0.012):datavg[997] clodmget -nq group='LDAPClient and name=RBACConfig' -f value HACMPLDAP
+epprd_rg:clvaryonvg(0.013):datavg[997] 2> /dev/null
+epprd_rg:clvaryonvg(0.016):datavg[997] is_rbac_enabled=''
+epprd_rg:clvaryonvg(0.016):datavg[999] role=''
+epprd_rg:clvaryonvg(0.016):datavg[999] typeset role
+epprd_rg:clvaryonvg(0.016):datavg[1000] [[ root != root ]]
+epprd_rg:clvaryonvg(0.016):datavg[1009] LEAVEOFF=FALSE
+epprd_rg:clvaryonvg(0.016):datavg[1010] FORCEON=''
+epprd_rg:clvaryonvg(0.016):datavg[1011] FORCEUPD=FALSE
+epprd_rg:clvaryonvg(0.016):datavg[1012] NOQUORUM=20
+epprd_rg:clvaryonvg(0.016):datavg[1013] MISSING_UPDATES=30
+epprd_rg:clvaryonvg(0.016):datavg[1014] DATA_DIVERGENCE=31
+epprd_rg:clvaryonvg(0.016):datavg[1015] ARGS=''
+epprd_rg:clvaryonvg(0.016):datavg[1016] typeset -li varyonvg_rc
+epprd_rg:clvaryonvg(0.016):datavg[1017] typeset -li MAXLVS
+epprd_rg:clvaryonvg(0.016):datavg[1018] ENODEV=19
+epprd_rg:clvaryonvg(0.016):datavg[1018] typeset -li ENODEV
+epprd_rg:clvaryonvg(0.016):datavg[1020] set -u
+epprd_rg:clvaryonvg(0.016):datavg[1022] /bin/dspmsg -s 2 cspoc.cat 31 'usage: clvaryonvg [-F] [-f] [-n] [-p] [-s] [-o] \n'
+epprd_rg:clvaryonvg(0.019):datavg[1022] USAGE='usage: clvaryonvg [-F] [-f] [-n] [-p] [-s] [-o] '
+epprd_rg:clvaryonvg(0.019):datavg[1023] (( 2 < 1 ))
+epprd_rg:clvaryonvg(0.019):datavg[1029] : Parse the options
+epprd_rg:clvaryonvg(0.019):datavg[1031] S_FLAG=''
+epprd_rg:clvaryonvg(0.019):datavg[1032] P_FLAG=''
+epprd_rg:clvaryonvg(0.019):datavg[1033] getopts :Ffnops option
+epprd_rg:clvaryonvg(0.019):datavg[1038] : -n Always applied, retained for compatibility
+epprd_rg:clvaryonvg(0.019):datavg[1033] getopts :Ffnops option
+epprd_rg:clvaryonvg(0.019):datavg[1048] : Pick up the volume group name, which follows the options
+epprd_rg:clvaryonvg(0.019):datavg[1050] shift 1
+epprd_rg:clvaryonvg(0.019):datavg[1051] VG=datavg
+epprd_rg:clvaryonvg(0.019):datavg[1054] : Set up filenames we will be using
+epprd_rg:clvaryonvg(0.019):datavg[1056] VGDIR=/usr/es/sbin/cluster/etc/vg/
+epprd_rg:clvaryonvg(0.019):datavg[1057] TSFILE=/usr/es/sbin/cluster/etc/vg/datavg.tstamp
+epprd_rg:clvaryonvg(0.019):datavg[1058] DSFILE=/usr/es/sbin/cluster/etc/vg/datavg.desc
+epprd_rg:clvaryonvg(0.019):datavg[1059] RPFILE=/usr/es/sbin/cluster/etc/vg/datavg.replay
+epprd_rg:clvaryonvg(0.019):datavg[1060] permset=/usr/es/sbin/cluster/etc/vg/datavg.perms
+epprd_rg:clvaryonvg(0.019):datavg[1061] failfile=/usr/es/sbin/cluster/etc/vg/datavg.fail
+epprd_rg:clvaryonvg(0.019):datavg[1065] : Get some LVM information we are going to need in processing this
+epprd_rg:clvaryonvg(0.019):datavg[1066] : volume group:
+epprd_rg:clvaryonvg(0.019):datavg[1067] : - volume group identifier - vgid
+epprd_rg:clvaryonvg(0.019):datavg[1068] : - list of disks
+epprd_rg:clvaryonvg(0.019):datavg[1069] : - quorum indicator
+epprd_rg:clvaryonvg(0.019):datavg[1070] : - timestamp if present
+epprd_rg:clvaryonvg(0.019):datavg[1072] /usr/sbin/getlvodm -v datavg
+epprd_rg:clvaryonvg(0.022):datavg[1072] VGID=00c44af100004b00000001851e9dc053
+epprd_rg:clvaryonvg(0.024):datavg[1073] cut '-d ' -f2
+epprd_rg:clvaryonvg(0.023):datavg[1073] /usr/sbin/getlvodm -w 00c44af100004b00000001851e9dc053
+epprd_rg:clvaryonvg(0.027):datavg[1073] pvlst=$'hdisk2\nhdisk3\nhdisk4\nhdisk5\nhdisk6\nhdisk7\nhdisk8'
+epprd_rg:clvaryonvg(0.027):datavg[1074] /usr/sbin/getlvodm -Q datavg
+epprd_rg:clvaryonvg(0.030):datavg[1074] quorum=y
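Three getlvodm queries gather what clvaryonvg needs up front: the VGID for the name, the physical volumes carrying that VGID, and the quorum setting:

    VGID=$(getlvodm -v datavg)           # VG name -> VGID
    getlvodm -w $VGID | cut -d' ' -f2    # PVID/name pairs; field 2 = hdisk
    getlvodm -Q datavg                   # quorum flag, y or n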
+epprd_rg:clvaryonvg(0.030):datavg[1075] TS_FROM_DISK=''
+epprd_rg:clvaryonvg(0.030):datavg[1076] TS_FROM_ODM=''
+epprd_rg:clvaryonvg(0.030):datavg[1077] GOOD_PV=''
+epprd_rg:clvaryonvg(0.030):datavg[1078] O_flag=''
+epprd_rg:clvaryonvg(0.030):datavg[1079] A_flag=''
+epprd_rg:clvaryonvg(0.030):datavg[1080] mode_flag=''
+epprd_rg:clvaryonvg(0.030):datavg[1081] vg_on_mode=''
+epprd_rg:clvaryonvg(0.030):datavg[1082] vg_set_passive=FALSE
+epprd_rg:clvaryonvg(0.030):datavg[1084] odmget -q 'attribute = varyon_state' PdAt
+epprd_rg:clvaryonvg(0.033):datavg[1084] [[ -n $'\nPdAt:\n\tuniquetype = "logical_volume/vgsubclass/vgtype"\n\tattribute = "varyon_state"\n\tdeflt = "0"\n\tvalues = "0,1,2,3"\n\twidth = ""\n\ttype = "R"\n\tgeneric = ""\n\trep = "l"\n\tnls_index = 0' ]]
+epprd_rg:clvaryonvg(0.033):datavg[1087] : LVM may record that a volume group was varied on from an earlier
+epprd_rg:clvaryonvg(0.033):datavg[1088] : IPL. Rely on HA state tracking, and override the LVM check
+epprd_rg:clvaryonvg(0.033):datavg[1090] O_flag=-O
+epprd_rg:clvaryonvg(0.033):datavg[1093] : Checking if SCSI PR is enabled and, if so,
+epprd_rg:clvaryonvg(0.033):datavg[1094] : confirming if the SCSI PR reservations are intact.
+epprd_rg:clvaryonvg(0.034):datavg[1096] lssrc -ls clstrmgrES
+epprd_rg:clvaryonvg(0.035):datavg[1096] 2>& 1
+epprd_rg:clvaryonvg(0.035):datavg[1096] egrep -q -v 'ST_INIT|NOT_CONFIGURED'
+epprd_rg:clvaryonvg(0.035):datavg[1096] grep 'Current state:'
+epprd_rg:clvaryonvg(0.050):datavg[1098] clodmget -n -q policy=scsi -f value HACMPsplitmerge
+epprd_rg:clvaryonvg(0.053):datavg[1098] SCSIPR_ENABLED=''
+epprd_rg:clvaryonvg(0.053):datavg[1098] typeset SCSIPR_ENABLED
+epprd_rg:clvaryonvg(0.053):datavg[1099] clodmget -q $'name like \'*VOLUME_GROUP\' and value = datavg' -f group -n HACMPresource
+epprd_rg:clvaryonvg(0.056):datavg[1099] resgrp=epprd_rg
+epprd_rg:clvaryonvg(0.056):datavg[1099] typeset resgrp
+epprd_rg:clvaryonvg(0.056):datavg[1100] [[ '' == Yes ]]
+epprd_rg:clvaryonvg(0.056):datavg[1134] : Operations such as varying on the volume group are likely to
+epprd_rg:clvaryonvg(0.056):datavg[1135] : require read/write access. So, set any volume group fencing appropriately.
+epprd_rg:clvaryonvg(0.056):datavg[1137] cl_set_vg_fence_height -c datavg rw
+epprd_rg:clvaryonvg(0.060):datavg[1138] RC=0
+epprd_rg:clvaryonvg(0.060):datavg[1139] (( 19 == 0 ))
+epprd_rg:clvaryonvg(0.060):datavg[1147] : Return code from volume group fencing for datavg is 0
+epprd_rg:clvaryonvg(0.060):datavg[1148] (( 0 != 0 ))
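Before attempting varyon, fencing for the datavg disks is set to read/write with cl_set_vg_fence_height (the PowerHA fencing utility invoked above); the script compares the return code against ENODEV (19) first, apparently tolerating an absent fence group, before treating other nonzero codes as errors:

    cl_set_vg_fence_height -c datavg rw    # permit rw access ahead of varyonvg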
+epprd_rg:clvaryonvg(0.060):datavg[1160] : Check on the current state of the volume group
+epprd_rg:clvaryonvg(0.061):datavg[1182] grep -x -q datavg
+epprd_rg:clvaryonvg(0.061):datavg[1182] lsvg -L
+epprd_rg:clvaryonvg(0.065):datavg[1184] : The volume group is known - check to see if it is already varied on.
+epprd_rg:clvaryonvg(0.066):datavg[1186] grep -x -q datavg
+epprd_rg:clvaryonvg(0.066):datavg[1186] lsvg -L -o
+epprd_rg:clvaryonvg(0.069):datavg[1190] lsvg -L datavg
+epprd_rg:clvaryonvg(0.070):datavg[1190] 2> /dev/null
+epprd_rg:clvaryonvg(0.069):datavg[1190] grep -q -i -w passive-only
+epprd_rg:clvaryonvg(0.112):datavg[1191] vg_on_mode=passive
+epprd_rg:clvaryonvg(0.114):datavg[1194] grep -iw removed
+epprd_rg:clvaryonvg(0.114):datavg[1194] lsvg -p datavg
+epprd_rg:clvaryonvg(0.114):datavg[1194] 2> /dev/null
+epprd_rg:clvaryonvg(0.134):datavg[1194] removed_disks=''
+epprd_rg:clvaryonvg(0.134):datavg[1195] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.134):datavg[1213] [[ -n passive ]]
+epprd_rg:clvaryonvg(0.134):datavg[1215] lqueryvg -g 00c44af100004b00000001851e9dc053
+epprd_rg:clvaryonvg(0.135):datavg[1215] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.154):datavg[1321] :
+epprd_rg:clvaryonvg(0.154):datavg[1322] : First, sniff at the disk to see if the local ODM information
+epprd_rg:clvaryonvg(0.154):datavg[1323] : matches what is on the disk.
+epprd_rg:clvaryonvg(0.154):datavg[1324] :
+epprd_rg:clvaryonvg(0.154):datavg[1326] vgdatimestamps
+epprd_rg:clvaryonvg(0.154):datavg[vgdatimestamps:201] PS4_FUNC=vgdatimestamps
+epprd_rg:clvaryonvg(0.154):datavg[vgdatimestamps:201] typeset PS4_FUNC
+epprd_rg:clvaryonvg(0.154):datavg[vgdatimestamps:202] [[ high == high ]]
+epprd_rg:clvaryonvg(0.155):datavg[vgdatimestamps:202] set -x
+epprd_rg:clvaryonvg(0.155):datavg[vgdatimestamps:203] set -u
+epprd_rg:clvaryonvg(0.155):datavg[vgdatimestamps:206] : See what timestamp LVM has recorded from the last time it checked
+epprd_rg:clvaryonvg(0.155):datavg[vgdatimestamps:207] : the disks
+epprd_rg:clvaryonvg(0.155):datavg[vgdatimestamps:209] /usr/sbin/getlvodm -T 00c44af100004b00000001851e9dc053
+epprd_rg:clvaryonvg(0.155):datavg[vgdatimestamps:209] 2> /dev/null
+epprd_rg:clvaryonvg(0.158):datavg[vgdatimestamps:209] TS_FROM_ODM=63d4e41f29287594
+epprd_rg:clvaryonvg(0.158):datavg[vgdatimestamps:212] : Check to see if HACMP is maintaining a timestamp for this volume group
+epprd_rg:clvaryonvg(0.158):datavg[vgdatimestamps:213] : Needed for some older volume groups
+epprd_rg:clvaryonvg(0.158):datavg[vgdatimestamps:215] [[ -s /usr/es/sbin/cluster/etc/vg/datavg.tstamp ]]
+epprd_rg:clvaryonvg(0.158):datavg[vgdatimestamps:234] : Get the time stamp from the actual disk
+epprd_rg:clvaryonvg(0.158):datavg[vgdatimestamps:236] clvgdats /dev/datavg
+epprd_rg:clvaryonvg(0.159):datavg[vgdatimestamps:236] 2> /dev/null
+epprd_rg:clvaryonvg(0.168):datavg[vgdatimestamps:236] TS_FROM_DISK=63d4e41f29287594
+epprd_rg:clvaryonvg(0.169):datavg[vgdatimestamps:237] clvgdats_rc=0
+epprd_rg:clvaryonvg(0.169):datavg[vgdatimestamps:238] (( 0 != 0 ))
+epprd_rg:clvaryonvg(0.169):datavg[vgdatimestamps:247] [[ -z 63d4e41f29287594 ]]
+epprd_rg:clvaryonvg(0.169):datavg[1328] [[ 63d4e41f29287594 != 63d4e41f29287594 ]]
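This is the "sniff at the disk" check: getlvodm -T returns the VGDA timestamp LVM last cached in the ODM, while clvgdats (the PowerHA helper used above) reads the timestamp actually on disk. A mismatch would mean the local ODM copy of the VG is stale and must be resynced before it can be trusted; here both read 63d4e41f29287594. The decision, condensed:

    TS_FROM_ODM=$(getlvodm -T $VGID 2>/dev/null)      # ODM-cached stamp
    TS_FROM_DISK=$(clvgdats /dev/datavg 2>/dev/null)  # stamp on the VGDA
    if [[ $TS_FROM_ODM != $TS_FROM_DISK ]]
    then
        : # ODM is stale: resync definitions before trusting them
    fi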
+epprd_rg:clvaryonvg(0.169):datavg[1344] : There is a chance that a VG that should be in passive mode is not.
+epprd_rg:clvaryonvg(0.169):datavg[1345] : Run cl_pvo to put it in passive mode if possible.
+epprd_rg:clvaryonvg(0.169):datavg[1350] [[ -z passive ]]
+epprd_rg:clvaryonvg(0.169):datavg[1350] [[ passive == ordinary ]]
+epprd_rg:clvaryonvg(0.169):datavg[1350] [[ passive == passive ]]
+epprd_rg:clvaryonvg(0.169):datavg[1350] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.169):datavg[1381] : Let us assume that the old style synclvodm would sync all the PV/FS changes.
+epprd_rg:clvaryonvg(0.169):datavg[1383] expimpvg_notrequired=1
+epprd_rg:clvaryonvg(0.169):datavg[1386] : Optimistically give varyonvg a try.
+epprd_rg:clvaryonvg(0.169):datavg[1388] [[ passive == passive ]]
+epprd_rg:clvaryonvg(0.169):datavg[1391] : If the volume group was varied on in passive mode when this node came
+epprd_rg:clvaryonvg(0.169):datavg[1392] : up, flip it over to active mode. Following logic will then fall
+epprd_rg:clvaryonvg(0.169):datavg[1393] : through to updatefs.
+epprd_rg:clvaryonvg(0.169):datavg[1395] [[ passive == passive ]]
+epprd_rg:clvaryonvg(0.169):datavg[1395] A_flag=-A
+epprd_rg:clvaryonvg(0.169):datavg[1396] varyonvg -n -c -A -O datavg
+epprd_rg:clvaryonvg(0.170):datavg[1396] 2>& 1
+epprd_rg:clvaryonvg(0.396):datavg[1396] varyonvg_output=''
+epprd_rg:clvaryonvg(0.397):datavg[1397] varyonvg_rc=0
+epprd_rg:clvaryonvg(0.397):datavg[1397] typeset -li varyonvg_rc
+epprd_rg:clvaryonvg(0.397):datavg[1399] (( 0 != 0 ))
+epprd_rg:clvaryonvg(0.397):datavg[1481] (( 0 != 0 ))
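Because the group was already varied on in passive ECM mode at boot, a single varyonvg call (as used here) flips it to active: -c selects concurrent mode, -A requests active mode, -O overrides LVM's recorded varyon state, and -n suppresses the automatic sync that cl_activate_vgs -n asked to skip:

    varyonvg -n -c -A -O datavg    # ECM passive -> active, no auto-sync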
+epprd_rg:clvaryonvg(0.397):datavg[1576] : At this point, datavg should be varied on
+epprd_rg:clvaryonvg(0.397):datavg[1578] [[ FALSE == TRUE ]]
+epprd_rg:clvaryonvg(0.397):datavg[1585] [[ -z 63d4e41f29287594 ]]
+epprd_rg:clvaryonvg(0.397):datavg[1592] vgdatimestamps
+epprd_rg:clvaryonvg(0.397):datavg[vgdatimestamps:201] PS4_FUNC=vgdatimestamps
+epprd_rg:clvaryonvg(0.397):datavg[vgdatimestamps:201] typeset PS4_FUNC
+epprd_rg:clvaryonvg(0.397):datavg[vgdatimestamps:202] [[ high == high ]]
+epprd_rg:clvaryonvg(0.397):datavg[vgdatimestamps:202] set -x
+epprd_rg:clvaryonvg(0.397):datavg[vgdatimestamps:203] set -u
+epprd_rg:clvaryonvg(0.397):datavg[vgdatimestamps:206] : See what timestamp LVM has recorded from the last time it checked
+epprd_rg:clvaryonvg(0.397):datavg[vgdatimestamps:207] : the disks
+epprd_rg:clvaryonvg(0.397):datavg[vgdatimestamps:209] /usr/sbin/getlvodm -T 00c44af100004b00000001851e9dc053
+epprd_rg:clvaryonvg(0.398):datavg[vgdatimestamps:209] 2> /dev/null
+epprd_rg:clvaryonvg(0.400):datavg[vgdatimestamps:209] TS_FROM_ODM=63d4e4ec07aab272
+epprd_rg:clvaryonvg(0.400):datavg[vgdatimestamps:212] : Check to see if HACMP is maintaining a timestamp for this volume group
+epprd_rg:clvaryonvg(0.400):datavg[vgdatimestamps:213] : Needed for some older volume groups
+epprd_rg:clvaryonvg(0.400):datavg[vgdatimestamps:215] [[ -s /usr/es/sbin/cluster/etc/vg/datavg.tstamp ]]
+epprd_rg:clvaryonvg(0.401):datavg[vgdatimestamps:234] : Get the time stamp from the actual disk
+epprd_rg:clvaryonvg(0.401):datavg[vgdatimestamps:236] clvgdats /dev/datavg
+epprd_rg:clvaryonvg(0.401):datavg[vgdatimestamps:236] 2> /dev/null
+epprd_rg:clvaryonvg(0.411):datavg[vgdatimestamps:236] TS_FROM_DISK=63d4e4ec07aab272
+epprd_rg:clvaryonvg(0.411):datavg[vgdatimestamps:237] clvgdats_rc=0
+epprd_rg:clvaryonvg(0.411):datavg[vgdatimestamps:238] (( 0 != 0 ))
+epprd_rg:clvaryonvg(0.411):datavg[vgdatimestamps:247] [[ -z 63d4e4ec07aab272 ]]
+epprd_rg:clvaryonvg(0.411):datavg[1600] [[ 63d4e4ec07aab272 != 63d4e4ec07aab272 ]]
+epprd_rg:clvaryonvg(0.411):datavg[1622] [[ FALSE == TRUE ]]
+epprd_rg:clvaryonvg(0.411):datavg[1633] : Even if everything looks OK, update the local file system
+epprd_rg:clvaryonvg(0.411):datavg[1634] : definitions, since changes there do not show up in the
+epprd_rg:clvaryonvg(0.411):datavg[1635] : VGDA timestamps
+epprd_rg:clvaryonvg(0.411):datavg[1637] updatefs datavg
+epprd_rg:clvaryonvg(0.411):datavg[updatefs:506] PS4_FUNC=updatefs
+epprd_rg:clvaryonvg(0.411):datavg[updatefs:506] typeset PS4_FUNC
+epprd_rg:clvaryonvg(0.411):datavg[updatefs:507] [[ high == high ]]
+epprd_rg:clvaryonvg(0.411):datavg[updatefs:507] set -x
+epprd_rg:clvaryonvg(0.411):datavg[updatefs:508] do_imfs=''
+epprd_rg:clvaryonvg(0.411):datavg[updatefs:508] typeset do_imfs
+epprd_rg:clvaryonvg(0.411):datavg[updatefs:509] has_typed_lvs=''
+epprd_rg:clvaryonvg(0.411):datavg[updatefs:509] typeset has_typed_lvs
+epprd_rg:clvaryonvg(0.411):datavg[updatefs:512] : Delete existing filesystem information for this volume group. This is
+epprd_rg:clvaryonvg(0.411):datavg[updatefs:513] : needed because imfs will not update an existing /etc/filesystems entry.
+epprd_rg:clvaryonvg(0.413):datavg[updatefs:515] cut -f1 '-d '
+epprd_rg:clvaryonvg(0.413):datavg[updatefs:515] /usr/sbin/getlvodm -L datavg
+epprd_rg:clvaryonvg(0.417):datavg[updatefs:515] lv_list=$'saplv\nsapmntlv\noraclelv\nepplv\noraarchlv\nsapdata1lv\nsapdata2lv\nsapdata3lv\nsapdata4lv\nboardlv\noriglogAlv\noriglogBlv\nmirrlogAlv\nmirrlogBlv\nepprdaloglv'
+epprd_rg:clvaryonvg(0.417):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.417):datavg[updatefs:521] clodmget -q 'name = saplv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.420):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.420):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.420):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.420):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.420):datavg[updatefs:530] /usr/sbin/getlvcb -f saplv
+epprd_rg:clvaryonvg(0.421):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.439):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.439):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(0.439):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.441):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.441):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes
+epprd_rg:clvaryonvg(0.444):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.445):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.445):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.445):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.446):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.465):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.465):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.465):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.465):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.466):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.466):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.469):datavg[updatefs:545] /usr/sbin/imfs -lx saplv
+epprd_rg:clvaryonvg(0.473):datavg[updatefs:546] do_imfs=true
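The block that just completed for saplv now repeats for each of the fifteen logical volumes in lv_list: read the filesystem attributes from the LVCB with getlvcb -f, extract the jfs2 log device and confirm its LVCB is readable with getlvcb -t, and only then remove the stale /etc/filesystems stanza with imfs -lx (imfs cannot update an existing stanza; it is re-created later). A condensed sketch of one pass, omitting the raw-LV and INLINE-log special cases the real script handles:

    for lv in $(getlvodm -L datavg | cut -d' ' -f1)
    do
        fs_info=$(LC_ALL=C getlvcb -f $lv)    # fs attributes from the LVCB
        log_lv=$(echo $fs_info | sed -n 's/.*log=\([^:]*\).*/\1/p')
        if [[ -n $fs_info && -n $log_lv ]] &&
           getlvcb -t ${log_lv##*/} >/dev/null 2>&1   # log LVCB readable?
        then
            imfs -lx $lv    # drop the stanza; rebuilt from the LVCB later
        fi
    done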
+epprd_rg:clvaryonvg(0.473):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.473):datavg[updatefs:521] clodmget -q 'name = sapmntlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.477):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.477):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.477):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.477):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.477):datavg[updatefs:530] /usr/sbin/getlvcb -f sapmntlv
+epprd_rg:clvaryonvg(0.478):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.495):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.495):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(0.495):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.497):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.497):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes
+epprd_rg:clvaryonvg(0.500):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.500):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.500):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.500):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.502):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.520):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.520):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.520):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.520):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.522):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.521):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.525):datavg[updatefs:545] /usr/sbin/imfs -lx sapmntlv
+epprd_rg:clvaryonvg(0.529):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.529):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.529):datavg[updatefs:521] clodmget -q 'name = oraclelv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.532):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.532):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.532):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.532):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.532):datavg[updatefs:530] /usr/sbin/getlvcb -f oraclelv
+epprd_rg:clvaryonvg(0.533):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.551):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.551):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(0.551):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.553):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.553):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes
+epprd_rg:clvaryonvg(0.556):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.556):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.556):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.556):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.558):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.578):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.578):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.578):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.578):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.579):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.579):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.582):datavg[updatefs:545] /usr/sbin/imfs -lx oraclelv
+epprd_rg:clvaryonvg(0.587):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.587):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.587):datavg[updatefs:521] clodmget -q 'name = epplv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.590):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.590):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.590):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.590):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.590):datavg[updatefs:530] /usr/sbin/getlvcb -f epplv
+epprd_rg:clvaryonvg(0.591):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.610):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.610):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(0.610):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.612):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.612):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes
+epprd_rg:clvaryonvg(0.615):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.616):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.616):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.616):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.617):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.636):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.636):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.636):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.636):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.637):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.637):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.640):datavg[updatefs:545] /usr/sbin/imfs -lx epplv
+epprd_rg:clvaryonvg(0.645):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.645):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.645):datavg[updatefs:521] clodmget -q 'name = oraarchlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.648):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.648):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.648):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.648):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.648):datavg[updatefs:530] /usr/sbin/getlvcb -f oraarchlv
+epprd_rg:clvaryonvg(0.649):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.667):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.667):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(0.667):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.669):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.669):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes
+epprd_rg:clvaryonvg(0.672):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.672):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.672):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.672):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.674):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.692):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.692):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.692):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.692):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.693):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.693):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.696):datavg[updatefs:545] /usr/sbin/imfs -lx oraarchlv
+epprd_rg:clvaryonvg(0.701):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.701):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.701):datavg[updatefs:521] clodmget -q 'name = sapdata1lv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.704):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.704):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.704):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.704):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.704):datavg[updatefs:530] /usr/sbin/getlvcb -f sapdata1lv
+epprd_rg:clvaryonvg(0.705):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.722):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.722):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(0.722):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.724):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.724):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes
+epprd_rg:clvaryonvg(0.728):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.728):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.728):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.728):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.729):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.748):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.748):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.748):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.748):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.749):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.749):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.752):datavg[updatefs:545] /usr/sbin/imfs -lx sapdata1lv
+epprd_rg:clvaryonvg(0.756):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.756):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.756):datavg[updatefs:521] clodmget -q 'name = sapdata2lv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.760):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.760):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.760):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.760):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.760):datavg[updatefs:530] /usr/sbin/getlvcb -f sapdata2lv
+epprd_rg:clvaryonvg(0.761):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.778):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.778):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(0.778):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.780):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.780):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes
+epprd_rg:clvaryonvg(0.784):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.784):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.784):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.784):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.785):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.804):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.804):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.804):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.804):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.805):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.805):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.808):datavg[updatefs:545] /usr/sbin/imfs -lx sapdata2lv
+epprd_rg:clvaryonvg(0.812):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.812):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.812):datavg[updatefs:521] clodmget -q 'name = sapdata3lv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.816):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.816):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.816):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.816):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.816):datavg[updatefs:530] /usr/sbin/getlvcb -f sapdata3lv
+epprd_rg:clvaryonvg(0.817):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.835):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.835):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(0.835):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.837):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.837):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes
+epprd_rg:clvaryonvg(0.840):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.840):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.840):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.840):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.842):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.861):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.861):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.861):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.861):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.862):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.862):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.865):datavg[updatefs:545] /usr/sbin/imfs -lx sapdata3lv
+epprd_rg:clvaryonvg(0.870):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.870):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.870):datavg[updatefs:521] clodmget -q 'name = sapdata4lv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.873):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.873):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.873):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.873):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.873):datavg[updatefs:530] /usr/sbin/getlvcb -f sapdata4lv
+epprd_rg:clvaryonvg(0.874):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.892):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.892):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(0.892):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.893):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.894):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes
+epprd_rg:clvaryonvg(0.897):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.897):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.897):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.897):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.898):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.917):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.917):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.917):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.917):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.918):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.918):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.921):datavg[updatefs:545] /usr/sbin/imfs -lx sapdata4lv
+epprd_rg:clvaryonvg(0.925):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.925):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.925):datavg[updatefs:521] clodmget -q 'name = boardlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.929):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.929):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.929):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.929):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.929):datavg[updatefs:530] /usr/sbin/getlvcb -f boardlv
+epprd_rg:clvaryonvg(0.930):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(0.947):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.947):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(0.947):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(0.949):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(0.949):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes
+epprd_rg:clvaryonvg(0.952):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(0.952):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(0.952):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(0.953):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(0.954):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(0.973):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(0.973):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(0.973):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(0.973):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(0.974):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(0.974):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(0.977):datavg[updatefs:545] /usr/sbin/imfs -lx boardlv
+epprd_rg:clvaryonvg(0.981):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(0.981):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(0.981):datavg[updatefs:521] clodmget -q 'name = origlogAlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(0.985):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(0.985):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(0.985):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(0.985):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(0.985):datavg[updatefs:530] /usr/sbin/getlvcb -f origlogAlv
+epprd_rg:clvaryonvg(0.986):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(1.004):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(1.004):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(1.004):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(1.006):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(1.006):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes
+epprd_rg:clvaryonvg(1.009):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(1.009):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(1.009):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(1.009):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(1.010):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(1.029):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(1.029):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(1.029):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(1.029):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(1.030):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(1.030):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(1.033):datavg[updatefs:545] /usr/sbin/imfs -lx origlogAlv
+epprd_rg:clvaryonvg(1.037):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(1.037):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(1.037):datavg[updatefs:521] clodmget -q 'name = origlogBlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(1.041):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(1.041):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(1.041):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(1.041):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(1.041):datavg[updatefs:530] /usr/sbin/getlvcb -f origlogBlv
+epprd_rg:clvaryonvg(1.042):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(1.059):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(1.060):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(1.060):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(1.061):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(1.061):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes
+epprd_rg:clvaryonvg(1.065):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(1.065):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(1.065):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(1.065):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(1.066):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(1.085):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(1.085):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(1.085):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(1.085):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(1.086):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(1.086):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(1.089):datavg[updatefs:545] /usr/sbin/imfs -lx origlogBlv
+epprd_rg:clvaryonvg(1.093):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(1.093):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(1.093):datavg[updatefs:521] clodmget -q 'name = mirrlogAlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(1.097):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(1.097):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(1.097):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(1.097):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(1.097):datavg[updatefs:530] /usr/sbin/getlvcb -f mirrlogAlv
+epprd_rg:clvaryonvg(1.098):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(1.116):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(1.116):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(1.116):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(1.117):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(1.117):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes
+epprd_rg:clvaryonvg(1.121):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(1.121):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(1.121):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(1.121):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(1.122):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(1.141):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(1.141):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(1.141):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(1.141):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(1.142):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(1.142):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(1.145):datavg[updatefs:545] /usr/sbin/imfs -lx mirrlogAlv
+epprd_rg:clvaryonvg(1.149):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(1.149):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(1.149):datavg[updatefs:521] clodmget -q 'name = mirrlogBlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(1.153):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(1.153):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(1.153):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(1.153):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(1.153):datavg[updatefs:530] /usr/sbin/getlvcb -f mirrlogBlv
+epprd_rg:clvaryonvg(1.154):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(1.171):datavg[updatefs:530] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(1.171):datavg[updatefs:531] [[ -n vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(1.171):datavg[updatefs:531] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(1.173):datavg[updatefs:532] sed -n 's/.*log=\([^:]*\).*/\1/p'
+epprd_rg:clvaryonvg(1.173):datavg[updatefs:532] echo vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes
+epprd_rg:clvaryonvg(1.176):datavg[updatefs:532] log_lv=/dev/epprdaloglv
+epprd_rg:clvaryonvg(1.176):datavg[updatefs:533] [[ -n /dev/epprdaloglv ]]
+epprd_rg:clvaryonvg(1.176):datavg[updatefs:533] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:clvaryonvg(1.176):datavg[updatefs:533] /usr/sbin/getlvcb -t epprdaloglv
+epprd_rg:clvaryonvg(1.178):datavg[updatefs:533] 1> /dev/null 2>& 1
+epprd_rg:clvaryonvg(1.196):datavg[updatefs:535] : Only delete the file system information if
+epprd_rg:clvaryonvg(1.196):datavg[updatefs:536] : 1. This logical volume is a file system
+epprd_rg:clvaryonvg(1.196):datavg[updatefs:537] : 2. Its LVCB is readable
+epprd_rg:clvaryonvg(1.196):datavg[updatefs:538] : 3. Its log's LVCB is readable
+epprd_rg:clvaryonvg(1.198):datavg[updatefs:540] print -- vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(1.198):datavg[updatefs:540] grep -q :type=
+epprd_rg:clvaryonvg(1.201):datavg[updatefs:545] /usr/sbin/imfs -lx mirrlogBlv
+epprd_rg:clvaryonvg(1.205):datavg[updatefs:546] do_imfs=true
+epprd_rg:clvaryonvg(1.205):datavg[updatefs:519] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(1.205):datavg[updatefs:521] clodmget -q 'name = epprdaloglv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(1.208):datavg[updatefs:521] [[ -n '' ]]
+epprd_rg:clvaryonvg(1.208):datavg[updatefs:526] : Some checks here to ensure that, before we delete the information
+epprd_rg:clvaryonvg(1.208):datavg[updatefs:527] : on a file system from /etc/filesystems, we have the
+epprd_rg:clvaryonvg(1.208):datavg[updatefs:528] : information to reconstruct it.
+epprd_rg:clvaryonvg(1.208):datavg[updatefs:530] /usr/sbin/getlvcb -f epprdaloglv
+epprd_rg:clvaryonvg(1.209):datavg[updatefs:530] LC_ALL=C
+epprd_rg:clvaryonvg(1.227):datavg[updatefs:530] fs_info=' '
+epprd_rg:clvaryonvg(1.227):datavg[updatefs:531] [[ -n ' ' ]]
+epprd_rg:clvaryonvg(1.227):datavg[updatefs:531] [[ ' ' != *([[:space:]]) ]]
+epprd_rg:clvaryonvg(1.227):datavg[updatefs:552] [[ -n true ]]
+epprd_rg:clvaryonvg(1.227):datavg[updatefs:556] : Pick up any file system changes that may have happened when
+epprd_rg:clvaryonvg(1.227):datavg[updatefs:557] : the volume group was owned by another node. That is, if a
+epprd_rg:clvaryonvg(1.227):datavg[updatefs:558] : local change was made - not through C-SPOC, we would have no
+epprd_rg:clvaryonvg(1.227):datavg[updatefs:559] : indication it happened.
+epprd_rg:clvaryonvg(1.227):datavg[updatefs:561] [[ -z '' ]]
+epprd_rg:clvaryonvg(1.227):datavg[updatefs:563] /usr/sbin/imfs datavg
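
Having deleted the individual stanzas, one imfs over the whole group re-imports the file system definitions from the LVCBs on disk, which is how changes made while another node owned the volume group (outside C-SPOC) get picked up. The pairing, sketched with the same assumed semantics as above:

    # imfs -lx <lv>  removes the /etc/filesystems stanza for one LV
    # imfs <vg>      re-adds stanzas for every LV in the VG from its LVCB
    if [[ -n $do_imfs ]]            # at least one stanza was removed above
    then
        /usr/sbin/imfs datavg       # rebuild from what the disks now say
    fi
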
+epprd_rg:clvaryonvg(1.900):datavg[updatefs:589] : For a valid file system configuration, the mount point in
+epprd_rg:clvaryonvg(1.900):datavg[updatefs:590] : /etc/filesystems for the logical volume should match the
+epprd_rg:clvaryonvg(1.900):datavg[updatefs:591] : label of the logical volume. The above imfs should have
+epprd_rg:clvaryonvg(1.900):datavg[updatefs:592] : matched those two. Now, check that they match the label
+epprd_rg:clvaryonvg(1.900):datavg[updatefs:593] : for the logical volume as saved in ODM.
+epprd_rg:clvaryonvg(1.900):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(1.900):datavg[updatefs:600] clodmget -q 'name = saplv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(1.904):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(1.904):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(1.904):datavg[updatefs:607] /usr/sbin/getlvcb -f saplv
+epprd_rg:clvaryonvg(1.923):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(1.923):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(1.923):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(1.923):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(1.923):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(1.923):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(1.923):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(1.923):datavg[updatefs:623] : Label and file system type from LVCB on disk for saplv
+epprd_rg:clvaryonvg(1.924):datavg[updatefs:625] getlvcb -T -A saplv
+epprd_rg:clvaryonvg(1.924):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(1.927):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(1.930):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(1.932):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(1.945):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(1.945):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(1.945):datavg[updatefs:632] : Mount point in /etc/filesystems for saplv
+epprd_rg:clvaryonvg(1.946):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/saplv$' /etc/filesystems
+epprd_rg:clvaryonvg(1.949):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(1.949):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(1.953):datavg[updatefs:634] fs_mount_point=/usr/sap
+epprd_rg:clvaryonvg(1.953):datavg[updatefs:637] : CuAt label attribute for saplv
+epprd_rg:clvaryonvg(1.953):datavg[updatefs:639] clodmget -q 'name = saplv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(1.957):datavg[updatefs:639] CuAt_label=/usr/sap
+epprd_rg:clvaryonvg(1.958):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(1.959):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(1.963):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(1.963):datavg[updatefs:657] [[ -z /usr/sap ]]
+epprd_rg:clvaryonvg(1.963):datavg[updatefs:657] [[ /usr/sap == None ]]
+epprd_rg:clvaryonvg(1.963):datavg[updatefs:665] [[ /usr/sap == /usr/sap ]]
+epprd_rg:clvaryonvg(1.963):datavg[updatefs:665] [[ /usr/sap != /usr/sap ]]
+epprd_rg:clvaryonvg(1.963):datavg[updatefs:685] [[ /usr/sap != /usr/sap ]]
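
From updatefs line 598 onward, each file system LV gets a three-way consistency check: the label recorded in the on-disk LVCB (getlvcb -T -A), the mount point now present in /etc/filesystems, and the label attribute cached in the ODM (CuAt). For saplv all three resolve to /usr/sap, so the branches at lines 657, 665 and 685 all fall through. Condensed into a ksh sketch using the trace's variable names (a reconstruction under those assumptions, with the repair path elided):

    # Three sources that must agree for a valid configuration (sketch)
    LC_ALL=C getlvcb -T -A $lv | egrep -w 'label =|type =' | paste -s - - |
        read skip skip lvcb_label skip skip lvcb_type rest    # LVCB on disk

    fs_mount_point=$(egrep -p "^([[:space:]])*dev([[:space:]])*= /dev/$lv\$" \
        /etc/filesystems | head -1 | cut -f1 -d:)             # /etc/filesystems

    CuAt_label=$(clodmget -q "name = $lv and attribute = label" -f value -n CuAt)

    if [[ $fs_mount_point != $CuAt_label || $lvcb_label != $CuAt_label ]]
    then
        : # mismatch - the script would repair the stanza or the ODM here
    fi
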
+epprd_rg:clvaryonvg(1.963):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(1.963):datavg[updatefs:600] clodmget -q 'name = sapmntlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(1.966):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(1.966):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(1.966):datavg[updatefs:607] /usr/sbin/getlvcb -f sapmntlv
+epprd_rg:clvaryonvg(1.983):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(1.983):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(1.983):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(1.983):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(1.983):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(1.983):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(1.984):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(1.984):datavg[updatefs:623] : Label and file system type from LVCB on disk for sapmntlv
+epprd_rg:clvaryonvg(1.984):datavg[updatefs:625] getlvcb -T -A sapmntlv
+epprd_rg:clvaryonvg(1.985):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(1.988):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(1.991):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(1.993):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.005):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.005):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.005):datavg[updatefs:632] : Mount point in /etc/filesystems for sapmntlv
+epprd_rg:clvaryonvg(2.007):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/sapmntlv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.009):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.011):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.014):datavg[updatefs:634] fs_mount_point=/sapmnt
+epprd_rg:clvaryonvg(2.014):datavg[updatefs:637] : CuAt label attribute for sapmntlv
+epprd_rg:clvaryonvg(2.014):datavg[updatefs:639] clodmget -q 'name = sapmntlv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.018):datavg[updatefs:639] CuAt_label=/sapmnt
+epprd_rg:clvaryonvg(2.019):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.021):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.023):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.023):datavg[updatefs:657] [[ -z /sapmnt ]]
+epprd_rg:clvaryonvg(2.023):datavg[updatefs:657] [[ /sapmnt == None ]]
+epprd_rg:clvaryonvg(2.023):datavg[updatefs:665] [[ /sapmnt == /sapmnt ]]
+epprd_rg:clvaryonvg(2.023):datavg[updatefs:665] [[ /sapmnt != /sapmnt ]]
+epprd_rg:clvaryonvg(2.023):datavg[updatefs:685] [[ /sapmnt != /sapmnt ]]
+epprd_rg:clvaryonvg(2.024):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.024):datavg[updatefs:600] clodmget -q 'name = oraclelv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.027):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.027):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.027):datavg[updatefs:607] /usr/sbin/getlvcb -f oraclelv
+epprd_rg:clvaryonvg(2.044):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(2.044):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.044):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.044):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.044):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.044):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(2.044):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.044):datavg[updatefs:623] : Label and file system type from LVCB on disk for oraclelv
+epprd_rg:clvaryonvg(2.045):datavg[updatefs:625] getlvcb -T -A oraclelv
+epprd_rg:clvaryonvg(2.045):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.048):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.051):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.053):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.066):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.066):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.066):datavg[updatefs:632] : Mount point in /etc/filesystems for oraclelv
+epprd_rg:clvaryonvg(2.068):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/oraclelv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.070):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.072):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.075):datavg[updatefs:634] fs_mount_point=/oracle
+epprd_rg:clvaryonvg(2.075):datavg[updatefs:637] : CuAt label attribute for oraclelv
+epprd_rg:clvaryonvg(2.075):datavg[updatefs:639] clodmget -q 'name = oraclelv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.078):datavg[updatefs:639] CuAt_label=/oracle
+epprd_rg:clvaryonvg(2.080):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.081):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.084):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.084):datavg[updatefs:657] [[ -z /oracle ]]
+epprd_rg:clvaryonvg(2.084):datavg[updatefs:657] [[ /oracle == None ]]
+epprd_rg:clvaryonvg(2.084):datavg[updatefs:665] [[ /oracle == /oracle ]]
+epprd_rg:clvaryonvg(2.084):datavg[updatefs:665] [[ /oracle != /oracle ]]
+epprd_rg:clvaryonvg(2.084):datavg[updatefs:685] [[ /oracle != /oracle ]]
+epprd_rg:clvaryonvg(2.084):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.084):datavg[updatefs:600] clodmget -q 'name = epplv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.087):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.087):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.087):datavg[updatefs:607] /usr/sbin/getlvcb -f epplv
+epprd_rg:clvaryonvg(2.105):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(2.105):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.105):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.105):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.105):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.105):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(2.105):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.105):datavg[updatefs:623] : Label and file system type from LVCB on disk for epplv
+epprd_rg:clvaryonvg(2.106):datavg[updatefs:625] getlvcb -T -A epplv
+epprd_rg:clvaryonvg(2.107):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.110):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.113):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.114):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.128):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.128):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.128):datavg[updatefs:632] : Mount point in /etc/filesystems for epplv
+epprd_rg:clvaryonvg(2.129):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/epplv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.132):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.133):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.137):datavg[updatefs:634] fs_mount_point=/oracle/EPP
+epprd_rg:clvaryonvg(2.137):datavg[updatefs:637] : CuAt label attribute for epplv
+epprd_rg:clvaryonvg(2.137):datavg[updatefs:639] clodmget -q 'name = epplv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.140):datavg[updatefs:639] CuAt_label=/oracle/EPP
+epprd_rg:clvaryonvg(2.142):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.143):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.146):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.146):datavg[updatefs:657] [[ -z /oracle/EPP ]]
+epprd_rg:clvaryonvg(2.146):datavg[updatefs:657] [[ /oracle/EPP == None ]]
+epprd_rg:clvaryonvg(2.146):datavg[updatefs:665] [[ /oracle/EPP == /oracle/EPP ]]
+epprd_rg:clvaryonvg(2.146):datavg[updatefs:665] [[ /oracle/EPP != /oracle/EPP ]]
+epprd_rg:clvaryonvg(2.146):datavg[updatefs:685] [[ /oracle/EPP != /oracle/EPP ]]
+epprd_rg:clvaryonvg(2.146):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.146):datavg[updatefs:600] clodmget -q 'name = oraarchlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.149):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.149):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.149):datavg[updatefs:607] /usr/sbin/getlvcb -f oraarchlv
+epprd_rg:clvaryonvg(2.167):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(2.167):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.167):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.167):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.167):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.167):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(2.167):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.167):datavg[updatefs:623] : Label and file system type from LVCB on disk for oraarchlv
+epprd_rg:clvaryonvg(2.168):datavg[updatefs:625] getlvcb -T -A oraarchlv
+epprd_rg:clvaryonvg(2.168):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.171):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.174):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.176):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.189):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.189):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.189):datavg[updatefs:632] : Mount point in /etc/filesystems for oraarchlv
+epprd_rg:clvaryonvg(2.190):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/oraarchlv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.193):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.194):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.198):datavg[updatefs:634] fs_mount_point=/oracle/EPP/oraarch
+epprd_rg:clvaryonvg(2.198):datavg[updatefs:637] : CuAt label attribute for oraarchlv
+epprd_rg:clvaryonvg(2.198):datavg[updatefs:639] clodmget -q 'name = oraarchlv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.201):datavg[updatefs:639] CuAt_label=/oracle/EPP/oraarch
+epprd_rg:clvaryonvg(2.202):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.204):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.207):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.207):datavg[updatefs:657] [[ -z /oracle/EPP/oraarch ]]
+epprd_rg:clvaryonvg(2.207):datavg[updatefs:657] [[ /oracle/EPP/oraarch == None ]]
+epprd_rg:clvaryonvg(2.207):datavg[updatefs:665] [[ /oracle/EPP/oraarch == /oracle/EPP/oraarch ]]
+epprd_rg:clvaryonvg(2.207):datavg[updatefs:665] [[ /oracle/EPP/oraarch != /oracle/EPP/oraarch ]]
+epprd_rg:clvaryonvg(2.207):datavg[updatefs:685] [[ /oracle/EPP/oraarch != /oracle/EPP/oraarch ]]
+epprd_rg:clvaryonvg(2.207):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.207):datavg[updatefs:600] clodmget -q 'name = sapdata1lv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.210):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.210):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.210):datavg[updatefs:607] /usr/sbin/getlvcb -f sapdata1lv
+epprd_rg:clvaryonvg(2.228):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(2.228):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.228):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.228):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.228):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.228):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(2.228):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.228):datavg[updatefs:623] : Label and file system type from LVCB on disk for sapdata1lv
+epprd_rg:clvaryonvg(2.229):datavg[updatefs:625] getlvcb -T -A sapdata1lv
+epprd_rg:clvaryonvg(2.229):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.232):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.235):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.237):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.250):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.250):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.250):datavg[updatefs:632] : Mount point in /etc/filesystems for sapdata1lv
+epprd_rg:clvaryonvg(2.251):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/sapdata1lv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.254):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.255):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.259):datavg[updatefs:634] fs_mount_point=/oracle/EPP/sapdata1
+epprd_rg:clvaryonvg(2.259):datavg[updatefs:637] : CuAt label attribute for sapdata1lv
+epprd_rg:clvaryonvg(2.259):datavg[updatefs:639] clodmget -q 'name = sapdata1lv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.262):datavg[updatefs:639] CuAt_label=/oracle/EPP/sapdata1
+epprd_rg:clvaryonvg(2.267):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.268):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.271):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.271):datavg[updatefs:657] [[ -z /oracle/EPP/sapdata1 ]]
+epprd_rg:clvaryonvg(2.271):datavg[updatefs:657] [[ /oracle/EPP/sapdata1 == None ]]
+epprd_rg:clvaryonvg(2.272):datavg[updatefs:665] [[ /oracle/EPP/sapdata1 == /oracle/EPP/sapdata1 ]]
+epprd_rg:clvaryonvg(2.272):datavg[updatefs:665] [[ /oracle/EPP/sapdata1 != /oracle/EPP/sapdata1 ]]
+epprd_rg:clvaryonvg(2.272):datavg[updatefs:685] [[ /oracle/EPP/sapdata1 != /oracle/EPP/sapdata1 ]]
+epprd_rg:clvaryonvg(2.272):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.272):datavg[updatefs:600] clodmget -q 'name = sapdata2lv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.275):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.275):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.275):datavg[updatefs:607] /usr/sbin/getlvcb -f sapdata2lv
+epprd_rg:clvaryonvg(2.293):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(2.294):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.294):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.294):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.294):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.294):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(2.294):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.294):datavg[updatefs:623] : Label and file system type from LVCB on disk for sapdata2lv
+epprd_rg:clvaryonvg(2.295):datavg[updatefs:625] getlvcb -T -A sapdata2lv
+epprd_rg:clvaryonvg(2.295):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.298):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.301):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.303):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.316):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.316):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.316):datavg[updatefs:632] : Mount point in /etc/filesystems for sapdata2lv
+epprd_rg:clvaryonvg(2.317):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/sapdata2lv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.320):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.321):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.324):datavg[updatefs:634] fs_mount_point=/oracle/EPP/sapdata2
+epprd_rg:clvaryonvg(2.324):datavg[updatefs:637] : CuAt label attribute for sapdata2lv
+epprd_rg:clvaryonvg(2.324):datavg[updatefs:639] clodmget -q 'name = sapdata2lv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.328):datavg[updatefs:639] CuAt_label=/oracle/EPP/sapdata2
+epprd_rg:clvaryonvg(2.329):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.330):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.333):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.333):datavg[updatefs:657] [[ -z /oracle/EPP/sapdata2 ]]
+epprd_rg:clvaryonvg(2.333):datavg[updatefs:657] [[ /oracle/EPP/sapdata2 == None ]]
+epprd_rg:clvaryonvg(2.334):datavg[updatefs:665] [[ /oracle/EPP/sapdata2 == /oracle/EPP/sapdata2 ]]
+epprd_rg:clvaryonvg(2.334):datavg[updatefs:665] [[ /oracle/EPP/sapdata2 != /oracle/EPP/sapdata2 ]]
+epprd_rg:clvaryonvg(2.334):datavg[updatefs:685] [[ /oracle/EPP/sapdata2 != /oracle/EPP/sapdata2 ]]
+epprd_rg:clvaryonvg(2.334):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.334):datavg[updatefs:600] clodmget -q 'name = sapdata3lv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.337):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.337):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.337):datavg[updatefs:607] /usr/sbin/getlvcb -f sapdata3lv
+epprd_rg:clvaryonvg(2.354):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(2.354):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.354):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.355):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.355):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.355):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(2.355):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.355):datavg[updatefs:623] : Label and file system type from LVCB on disk for sapdata3lv
+epprd_rg:clvaryonvg(2.356):datavg[updatefs:625] getlvcb -T -A sapdata3lv
+epprd_rg:clvaryonvg(2.356):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.359):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.362):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.364):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.376):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.376):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.376):datavg[updatefs:632] : Mount point in /etc/filesystems for sapdata3lv
+epprd_rg:clvaryonvg(2.378):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/sapdata3lv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.380):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.382):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.385):datavg[updatefs:634] fs_mount_point=/oracle/EPP/sapdata3
+epprd_rg:clvaryonvg(2.385):datavg[updatefs:637] : CuAt label attribute for sapdata3lv
+epprd_rg:clvaryonvg(2.385):datavg[updatefs:639] clodmget -q 'name = sapdata3lv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.389):datavg[updatefs:639] CuAt_label=/oracle/EPP/sapdata3
+epprd_rg:clvaryonvg(2.390):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.391):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.394):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.394):datavg[updatefs:657] [[ -z /oracle/EPP/sapdata3 ]]
+epprd_rg:clvaryonvg(2.394):datavg[updatefs:657] [[ /oracle/EPP/sapdata3 == None ]]
+epprd_rg:clvaryonvg(2.394):datavg[updatefs:665] [[ /oracle/EPP/sapdata3 == /oracle/EPP/sapdata3 ]]
+epprd_rg:clvaryonvg(2.394):datavg[updatefs:665] [[ /oracle/EPP/sapdata3 != /oracle/EPP/sapdata3 ]]
+epprd_rg:clvaryonvg(2.394):datavg[updatefs:685] [[ /oracle/EPP/sapdata3 != /oracle/EPP/sapdata3 ]]
+epprd_rg:clvaryonvg(2.394):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.395):datavg[updatefs:600] clodmget -q 'name = sapdata4lv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.398):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.398):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.398):datavg[updatefs:607] /usr/sbin/getlvcb -f sapdata4lv
+epprd_rg:clvaryonvg(2.415):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(2.415):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.415):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.415):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.415):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.415):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(2.415):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.415):datavg[updatefs:623] : Label and file system type from LVCB on disk for sapdata4lv
+epprd_rg:clvaryonvg(2.416):datavg[updatefs:625] getlvcb -T -A sapdata4lv
+epprd_rg:clvaryonvg(2.417):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.420):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.423):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.425):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.438):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.438):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.438):datavg[updatefs:632] : Mount point in /etc/filesystems for sapdata4lv
+epprd_rg:clvaryonvg(2.440):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/sapdata4lv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.442):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.444):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.447):datavg[updatefs:634] fs_mount_point=/oracle/EPP/sapdata4
+epprd_rg:clvaryonvg(2.447):datavg[updatefs:637] : CuAt label attribute for sapdata4lv
+epprd_rg:clvaryonvg(2.447):datavg[updatefs:639] clodmget -q 'name = sapdata4lv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.450):datavg[updatefs:639] CuAt_label=/oracle/EPP/sapdata4
+epprd_rg:clvaryonvg(2.452):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.453):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.456):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.456):datavg[updatefs:657] [[ -z /oracle/EPP/sapdata4 ]]
+epprd_rg:clvaryonvg(2.456):datavg[updatefs:657] [[ /oracle/EPP/sapdata4 == None ]]
+epprd_rg:clvaryonvg(2.456):datavg[updatefs:665] [[ /oracle/EPP/sapdata4 == /oracle/EPP/sapdata4 ]]
+epprd_rg:clvaryonvg(2.456):datavg[updatefs:665] [[ /oracle/EPP/sapdata4 != /oracle/EPP/sapdata4 ]]
+epprd_rg:clvaryonvg(2.456):datavg[updatefs:685] [[ /oracle/EPP/sapdata4 != /oracle/EPP/sapdata4 ]]
+epprd_rg:clvaryonvg(2.456):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.456):datavg[updatefs:600] clodmget -q 'name = boardlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.459):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.459):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.459):datavg[updatefs:607] /usr/sbin/getlvcb -f boardlv
+epprd_rg:clvaryonvg(2.477):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(2.477):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.477):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.477):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.477):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.477):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(2.477):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.477):datavg[updatefs:623] : Label and file system type from LVCB on disk for boardlv
+epprd_rg:clvaryonvg(2.478):datavg[updatefs:625] getlvcb -T -A boardlv
+epprd_rg:clvaryonvg(2.478):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.481):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.484):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.486):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.498):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.498):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.498):datavg[updatefs:632] : Mount point in /etc/filesystems for boardlv
+epprd_rg:clvaryonvg(2.500):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/boardlv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.502):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.504):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.508):datavg[updatefs:634] fs_mount_point=/board_org
+epprd_rg:clvaryonvg(2.508):datavg[updatefs:637] : CuAt label attribute for boardlv
+epprd_rg:clvaryonvg(2.508):datavg[updatefs:639] clodmget -q 'name = boardlv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.512):datavg[updatefs:639] CuAt_label=/board_org
+epprd_rg:clvaryonvg(2.515):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.517):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.520):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.520):datavg[updatefs:657] [[ -z /board_org ]]
+epprd_rg:clvaryonvg(2.520):datavg[updatefs:657] [[ /board_org == None ]]
+epprd_rg:clvaryonvg(2.520):datavg[updatefs:665] [[ /board_org == /board_org ]]
+epprd_rg:clvaryonvg(2.521):datavg[updatefs:665] [[ /board_org != /board_org ]]
+epprd_rg:clvaryonvg(2.521):datavg[updatefs:685] [[ /board_org != /board_org ]]
+epprd_rg:clvaryonvg(2.521):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.521):datavg[updatefs:600] clodmget -q 'name = origlogAlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.524):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.524):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.524):datavg[updatefs:607] /usr/sbin/getlvcb -f origlogAlv
+epprd_rg:clvaryonvg(2.540):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(2.540):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.540):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.540):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.540):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.540):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(2.540):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.541):datavg[updatefs:623] : Label and file system type from LVCB on disk for origlogAlv
+epprd_rg:clvaryonvg(2.541):datavg[updatefs:625] getlvcb -T -A origlogAlv
+epprd_rg:clvaryonvg(2.542):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.545):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.548):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.550):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.562):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.562):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.562):datavg[updatefs:632] : Mount point in /etc/filesystems for origlogAlv
+epprd_rg:clvaryonvg(2.563):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/origlogAlv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.566):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.567):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.570):datavg[updatefs:634] fs_mount_point=/oracle/EPP/origlogA
+epprd_rg:clvaryonvg(2.570):datavg[updatefs:637] : CuAt label attribute for origlogAlv
+epprd_rg:clvaryonvg(2.570):datavg[updatefs:639] clodmget -q 'name = origlogAlv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.574):datavg[updatefs:639] CuAt_label=/oracle/EPP/origlogA
+epprd_rg:clvaryonvg(2.575):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.576):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.580):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.580):datavg[updatefs:657] [[ -z /oracle/EPP/origlogA ]]
+epprd_rg:clvaryonvg(2.580):datavg[updatefs:657] [[ /oracle/EPP/origlogA == None ]]
+epprd_rg:clvaryonvg(2.580):datavg[updatefs:665] [[ /oracle/EPP/origlogA == /oracle/EPP/origlogA ]]
+epprd_rg:clvaryonvg(2.580):datavg[updatefs:665] [[ /oracle/EPP/origlogA != /oracle/EPP/origlogA ]]
+epprd_rg:clvaryonvg(2.580):datavg[updatefs:685] [[ /oracle/EPP/origlogA != /oracle/EPP/origlogA ]]
+epprd_rg:clvaryonvg(2.580):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.580):datavg[updatefs:600] clodmget -q 'name = origlogBlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.583):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.583):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.583):datavg[updatefs:607] /usr/sbin/getlvcb -f origlogBlv
+epprd_rg:clvaryonvg(2.601):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(2.601):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.601):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.601):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.601):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.601):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(2.601):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.601):datavg[updatefs:623] : Label and file system type from LVCB on disk for origlogBlv
+epprd_rg:clvaryonvg(2.602):datavg[updatefs:625] getlvcb -T -A origlogBlv
+epprd_rg:clvaryonvg(2.602):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.605):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.608):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.610):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.623):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.623):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.623):datavg[updatefs:632] : Mount point in /etc/filesystems for origlogBlv
+epprd_rg:clvaryonvg(2.624):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/origlogBlv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.627):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.628):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.632):datavg[updatefs:634] fs_mount_point=/oracle/EPP/origlogB
+epprd_rg:clvaryonvg(2.632):datavg[updatefs:637] : CuAt label attribute for origlogBlv
+epprd_rg:clvaryonvg(2.632):datavg[updatefs:639] clodmget -q 'name = origlogBlv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.635):datavg[updatefs:639] CuAt_label=/oracle/EPP/origlogB
+epprd_rg:clvaryonvg(2.636):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.638):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.641):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.641):datavg[updatefs:657] [[ -z /oracle/EPP/origlogB ]]
+epprd_rg:clvaryonvg(2.641):datavg[updatefs:657] [[ /oracle/EPP/origlogB == None ]]
+epprd_rg:clvaryonvg(2.641):datavg[updatefs:665] [[ /oracle/EPP/origlogB == /oracle/EPP/origlogB ]]
+epprd_rg:clvaryonvg(2.641):datavg[updatefs:665] [[ /oracle/EPP/origlogB != /oracle/EPP/origlogB ]]
+epprd_rg:clvaryonvg(2.641):datavg[updatefs:685] [[ /oracle/EPP/origlogB != /oracle/EPP/origlogB ]]
+epprd_rg:clvaryonvg(2.641):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.641):datavg[updatefs:600] clodmget -q 'name = mirrlogAlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.644):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.644):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.644):datavg[updatefs:607] /usr/sbin/getlvcb -f mirrlogAlv
+epprd_rg:clvaryonvg(2.661):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(2.662):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.662):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.662):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.662):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.662):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(2.662):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.662):datavg[updatefs:623] : Label and file system type from LVCB on disk for mirrlogAlv
+epprd_rg:clvaryonvg(2.663):datavg[updatefs:625] getlvcb -T -A mirrlogAlv
+epprd_rg:clvaryonvg(2.663):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.666):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.669):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.671):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.684):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.684):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.684):datavg[updatefs:632] : Mount point in /etc/filesystems for mirrlogAlv
+epprd_rg:clvaryonvg(2.685):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/mirrlogAlv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.688):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.689):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.692):datavg[updatefs:634] fs_mount_point=/oracle/EPP/mirrlogA
+epprd_rg:clvaryonvg(2.692):datavg[updatefs:637] : CuAt label attribute for mirrlogAlv
+epprd_rg:clvaryonvg(2.693):datavg[updatefs:639] clodmget -q 'name = mirrlogAlv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.696):datavg[updatefs:639] CuAt_label=/oracle/EPP/mirrlogA
+epprd_rg:clvaryonvg(2.697):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.699):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.702):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.702):datavg[updatefs:657] [[ -z /oracle/EPP/mirrlogA ]]
+epprd_rg:clvaryonvg(2.702):datavg[updatefs:657] [[ /oracle/EPP/mirrlogA == None ]]
+epprd_rg:clvaryonvg(2.702):datavg[updatefs:665] [[ /oracle/EPP/mirrlogA == /oracle/EPP/mirrlogA ]]
+epprd_rg:clvaryonvg(2.702):datavg[updatefs:665] [[ /oracle/EPP/mirrlogA != /oracle/EPP/mirrlogA ]]
+epprd_rg:clvaryonvg(2.702):datavg[updatefs:685] [[ /oracle/EPP/mirrlogA != /oracle/EPP/mirrlogA ]]
+epprd_rg:clvaryonvg(2.702):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.702):datavg[updatefs:600] clodmget -q 'name = mirrlogBlv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.705):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.705):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.705):datavg[updatefs:607] /usr/sbin/getlvcb -f mirrlogBlv
+epprd_rg:clvaryonvg(2.722):datavg[updatefs:607] fs_info=vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes '
+epprd_rg:clvaryonvg(2.722):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.722):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.722):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.722):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.722):datavg[updatefs:618] [[ -z vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' ]]
+epprd_rg:clvaryonvg(2.722):datavg[updatefs:618] [[ vfs='jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.722):datavg[updatefs:623] : Label and file system type from LVCB on disk for mirrlogBlv
+epprd_rg:clvaryonvg(2.723):datavg[updatefs:625] getlvcb -T -A mirrlogBlv
+epprd_rg:clvaryonvg(2.724):datavg[updatefs:625] LC_ALL=C
+epprd_rg:clvaryonvg(2.727):datavg[updatefs:625] egrep -w 'label =|type ='
+epprd_rg:clvaryonvg(2.730):datavg[updatefs:625] paste -s - -
+epprd_rg:clvaryonvg(2.732):datavg[updatefs:625] read skip skip lvcb_label skip skip lvcb_type rest
+epprd_rg:clvaryonvg(2.744):datavg[updatefs:626] [[ jfs2 != jfs ]]
+epprd_rg:clvaryonvg(2.744):datavg[updatefs:626] [[ jfs2 != jfs2 ]]
+epprd_rg:clvaryonvg(2.744):datavg[updatefs:632] : Mount point in /etc/filesystems for mirrlogBlv
+epprd_rg:clvaryonvg(2.746):datavg[updatefs:634] egrep -p '^([[:space:]])*dev([[:space:]])*= /dev/mirrlogBlv$' /etc/filesystems
+epprd_rg:clvaryonvg(2.749):datavg[updatefs:634] cut -f1 -d:
+epprd_rg:clvaryonvg(2.749):datavg[updatefs:634] head -1
+epprd_rg:clvaryonvg(2.753):datavg[updatefs:634] fs_mount_point=/oracle/EPP/mirrlogB
+epprd_rg:clvaryonvg(2.753):datavg[updatefs:637] : CuAt label attribute for mirrlogBlv
+epprd_rg:clvaryonvg(2.753):datavg[updatefs:639] clodmget -q 'name = mirrlogBlv and attribute = label' -f value -n CuAt
+epprd_rg:clvaryonvg(2.757):datavg[updatefs:639] CuAt_label=/oracle/EPP/mirrlogB
+epprd_rg:clvaryonvg(2.758):datavg[updatefs:640] print -- CuAt_label
+epprd_rg:clvaryonvg(2.759):datavg[updatefs:640] wc -l
+epprd_rg:clvaryonvg(2.763):datavg[updatefs:640] (( 1 != 1 ))
+epprd_rg:clvaryonvg(2.763):datavg[updatefs:657] [[ -z /oracle/EPP/mirrlogB ]]
+epprd_rg:clvaryonvg(2.763):datavg[updatefs:657] [[ /oracle/EPP/mirrlogB == None ]]
+epprd_rg:clvaryonvg(2.763):datavg[updatefs:665] [[ /oracle/EPP/mirrlogB == /oracle/EPP/mirrlogB ]]
+epprd_rg:clvaryonvg(2.763):datavg[updatefs:665] [[ /oracle/EPP/mirrlogB != /oracle/EPP/mirrlogB ]]
+epprd_rg:clvaryonvg(2.763):datavg[updatefs:685] [[ /oracle/EPP/mirrlogB != /oracle/EPP/mirrlogB ]]
+epprd_rg:clvaryonvg(2.763):datavg[updatefs:598] : Skip filesystem update for raw logical volumes
+epprd_rg:clvaryonvg(2.763):datavg[updatefs:600] clodmget -q 'name = epprdaloglv and attribute = type and value = raw' -f value -n CuAt
+epprd_rg:clvaryonvg(2.766):datavg[updatefs:600] [[ -n '' ]]
+epprd_rg:clvaryonvg(2.766):datavg[updatefs:605] : Skip logical volumes for which getlvcb fails
+epprd_rg:clvaryonvg(2.766):datavg[updatefs:607] /usr/sbin/getlvcb -f epprdaloglv
+epprd_rg:clvaryonvg(2.783):datavg[updatefs:607] fs_info=' '
+epprd_rg:clvaryonvg(2.783):datavg[updatefs:608] cmd_rc=0
+epprd_rg:clvaryonvg(2.783):datavg[updatefs:608] typeset -i cmd_rc
+epprd_rg:clvaryonvg(2.783):datavg[updatefs:609] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.783):datavg[updatefs:615] : Skip logical volumes not associated with file systems
+epprd_rg:clvaryonvg(2.783):datavg[updatefs:618] [[ -z ' ' ]]
+epprd_rg:clvaryonvg(2.783):datavg[updatefs:618] [[ ' ' == *([[:space:]]) ]]
+epprd_rg:clvaryonvg(2.783):datavg[updatefs:620] continue
+epprd_rg:clvaryonvg(2.784):datavg[1641] : At this point, the volume should be varied on, so get the current
+epprd_rg:clvaryonvg(2.784):datavg[1642] : timestamp if needed
+epprd_rg:clvaryonvg(2.784):datavg[1644] vgdatimestamps
+epprd_rg:clvaryonvg(2.784):datavg[vgdatimestamps:201] PS4_FUNC=vgdatimestamps
+epprd_rg:clvaryonvg(2.784):datavg[vgdatimestamps:201] typeset PS4_FUNC
+epprd_rg:clvaryonvg(2.784):datavg[vgdatimestamps:202] [[ high == high ]]
+epprd_rg:clvaryonvg(2.784):datavg[vgdatimestamps:202] set -x
+epprd_rg:clvaryonvg(2.784):datavg[vgdatimestamps:203] set -u
+epprd_rg:clvaryonvg(2.784):datavg[vgdatimestamps:206] : See what timestamp LVM has recorded from the last time it checked
+epprd_rg:clvaryonvg(2.784):datavg[vgdatimestamps:207] : the disks
+epprd_rg:clvaryonvg(2.784):datavg[vgdatimestamps:209] /usr/sbin/getlvodm -T 00c44af100004b00000001851e9dc053
+epprd_rg:clvaryonvg(2.784):datavg[vgdatimestamps:209] 2> /dev/null
+epprd_rg:clvaryonvg(2.787):datavg[vgdatimestamps:209] TS_FROM_ODM=63d4e4ec07aab272
+epprd_rg:clvaryonvg(2.787):datavg[vgdatimestamps:212] : Check to see if HACMP is maintaining a timestamp for this volume group
+epprd_rg:clvaryonvg(2.787):datavg[vgdatimestamps:213] : Needed for some older volume groups
+epprd_rg:clvaryonvg(2.787):datavg[vgdatimestamps:215] [[ -s /usr/es/sbin/cluster/etc/vg/datavg.tstamp ]]
+epprd_rg:clvaryonvg(2.787):datavg[vgdatimestamps:234] : Get the time stamp from the actual disk
+epprd_rg:clvaryonvg(2.787):datavg[vgdatimestamps:236] clvgdats /dev/datavg
+epprd_rg:clvaryonvg(2.788):datavg[vgdatimestamps:236] 2> /dev/null
+epprd_rg:clvaryonvg(2.797):datavg[vgdatimestamps:236] TS_FROM_DISK=63d4e4ec07aab272
+epprd_rg:clvaryonvg(2.797):datavg[vgdatimestamps:237] clvgdats_rc=0
+epprd_rg:clvaryonvg(2.797):datavg[vgdatimestamps:238] (( 0 != 0 ))
+epprd_rg:clvaryonvg(2.797):datavg[vgdatimestamps:247] [[ -z 63d4e4ec07aab272 ]]
+epprd_rg:clvaryonvg(2.797):datavg[1645] [[ -z 63d4e4ec07aab272 ]]
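
Condensed, the timestamp handling above is the following sketch. getlvodm -T and clvgdats are the commands from the trace; mapping the VG name to its identifier with getlvodm -v is an assumption, as the trace already had the id in hand.

    # Sketch: compare the VG timestamp LVM cached in the ODM against the one
    # actually on disk in the VGDA. A mismatch means the local ODM is stale.
    vg=datavg
    vgid=$(getlvodm -v $vg)                          # assumption: -v maps name to VG id
    TS_FROM_ODM=$(/usr/sbin/getlvodm -T $vgid 2>/dev/null)
    TS_FROM_DISK=$(clvgdats /dev/$vg 2>/dev/null)    # VGDA timestamp read from disk
    if [[ -z $TS_FROM_DISK ]]; then
        print -u2 "cannot read VGDA timestamp for $vg"
    elif [[ $TS_FROM_ODM != $TS_FROM_DISK ]]; then
        : # stale ODM: clvaryonvg refreshes it before trusting local LVM data
    fi

In this run both sides returned 63d4e4ec07aab272, so no refresh was needed.
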
+epprd_rg:clvaryonvg(2.797):datavg[1656] : Finally, leave the volume in the requested state - on or off
+epprd_rg:clvaryonvg(2.797):datavg[1658] [[ FALSE == TRUE ]]
+epprd_rg:clvaryonvg(2.797):datavg[1665] (( 0 == 0 ))
+epprd_rg:clvaryonvg(2.797):datavg[1668] : Synchronize time stamps globally
+epprd_rg:clvaryonvg(2.797):datavg[1670] cl_update_vg_odm_ts -o datavg
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[77] version=1.13
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[121] o_flag=''
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[122] f_flag=''
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[123] getopts :of option
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[126] : Local timestamps should be good, since volume group was
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[127] : just varied on or off
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[128] o_flag=TRUE
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[123] getopts :of option
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[142] shift 1
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[144] vg_name=datavg
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[145] [[ -z datavg ]]
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[151] shift
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[152] node_list=''
+epprd_rg:cl_update_vg_odm_ts(0.001):datavg[153] /usr/es/sbin/cluster/utilities/cl_get_path all
+epprd_rg:cl_update_vg_odm_ts(0.004):datavg[153] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin
+epprd_rg:cl_update_vg_odm_ts(0.004):datavg[155] [[ -z '' ]]
+epprd_rg:cl_update_vg_odm_ts(0.004):datavg[158] : Check to see if this update is necessary - some LVM levels automatically
+epprd_rg:cl_update_vg_odm_ts(0.004):datavg[159] : update volume group timestamps clusterwide.
+epprd_rg:cl_update_vg_odm_ts(0.004):datavg[163] instfix -iqk IV74100
+epprd_rg:cl_update_vg_odm_ts(0.005):datavg[163] 1> /dev/null 2>& 1
+epprd_rg:cl_update_vg_odm_ts(0.012):datavg[164] instfix -iqk IV74883
+epprd_rg:cl_update_vg_odm_ts(0.012):datavg[164] 1> /dev/null 2>& 1
+epprd_rg:cl_update_vg_odm_ts(0.019):datavg[165] instfix -iqk IV74698
+epprd_rg:cl_update_vg_odm_ts(0.020):datavg[165] 1> /dev/null 2>& 1
+epprd_rg:cl_update_vg_odm_ts(0.026):datavg[166] instfix -iqk IV74246
+epprd_rg:cl_update_vg_odm_ts(0.027):datavg[166] 1> /dev/null 2>& 1
+epprd_rg:cl_update_vg_odm_ts(0.033):datavg[174] emgr -l -L IV74883
+epprd_rg:cl_update_vg_odm_ts(0.034):datavg[174] 2> /dev/null
+epprd_rg:cl_update_vg_odm_ts(0.303):datavg[174] emgr -l -L IV74698
+epprd_rg:cl_update_vg_odm_ts(0.304):datavg[174] 2> /dev/null
+epprd_rg:cl_update_vg_odm_ts(0.571):datavg[174] emgr -l -L IV74246
+epprd_rg:cl_update_vg_odm_ts(0.571):datavg[174] 2> /dev/null
+epprd_rg:cl_update_vg_odm_ts(0.820):datavg[183] : Each of the V, R, M and F fields is padded to fixed length,
+epprd_rg:cl_update_vg_odm_ts(0.820):datavg[184] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_update_vg_odm_ts(0.820):datavg[185] : 99.99.999.999
+epprd_rg:cl_update_vg_odm_ts(0.820):datavg[187] typeset -li V R M F
+epprd_rg:cl_update_vg_odm_ts(0.821):datavg[188] typeset -Z2 V
+epprd_rg:cl_update_vg_odm_ts(0.821):datavg[189] typeset -Z2 R
+epprd_rg:cl_update_vg_odm_ts(0.821):datavg[190] typeset -Z3 M
+epprd_rg:cl_update_vg_odm_ts(0.821):datavg[191] typeset -Z3 F
+epprd_rg:cl_update_vg_odm_ts(0.821):datavg[192] lvm_lvl6=601008015
+epprd_rg:cl_update_vg_odm_ts(0.821):datavg[192] typeset -li lvm_lvl6
+epprd_rg:cl_update_vg_odm_ts(0.821):datavg[194] lvm_lvl7=701003046
+epprd_rg:cl_update_vg_odm_ts(0.821):datavg[194] typeset -li lvm_lvl7
+epprd_rg:cl_update_vg_odm_ts(0.821):datavg[195] VRMF=0
+epprd_rg:cl_update_vg_odm_ts(0.821):datavg[195] typeset -li VRMF
+epprd_rg:cl_update_vg_odm_ts(0.821):datavg[198] : Here try and figure out what level of LVM is installed
+epprd_rg:cl_update_vg_odm_ts(0.822):datavg[200] lslpp -lcqOr bos.rte.lvm
+epprd_rg:cl_update_vg_odm_ts(0.823):datavg[200] cut -f3 -d:
+epprd_rg:cl_update_vg_odm_ts(0.823):datavg[200] read V R M F
+epprd_rg:cl_update_vg_odm_ts(0.823):datavg[200] IFS=.
+epprd_rg:cl_update_vg_odm_ts(0.824):datavg[201] VRMF=0702005101
+epprd_rg:cl_update_vg_odm_ts(0.824):datavg[203] (( 7 == 6 && 702005101 >= 601008015 ))
+epprd_rg:cl_update_vg_odm_ts(0.824):datavg[204] (( 702005101 >= 701003046 ))
+epprd_rg:cl_update_vg_odm_ts(0.824):datavg[207] : LVM at a level in which timestamp update is unnecessary
+epprd_rg:cl_update_vg_odm_ts(0.824):datavg[209] return 0
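
The VRMF arithmetic above is terse but mechanical: V, R, M and F are zero-padded to fixed widths so their concatenation forms a single integer that compares correctly. A minimal sketch with the same typeset flags and lslpp pipeline as the trace:

    # Sketch: build VVRRMMMFFF from the bos.rte.lvm fileset level (field 3 of lslpp -c).
    typeset -li V R M F VRMF
    typeset -Z2 V R                       # version/release padded to 2 digits
    typeset -Z3 M F                       # modification/fix padded to 3 digits
    typeset -li lvm_lvl6=601008015        # 6.1.8.15
    typeset -li lvm_lvl7=701003046        # 7.1.3.46
    lslpp -lcqOr bos.rte.lvm | cut -f3 -d: | IFS=. read V R M F
    VRMF=$V$R$M$F                         # e.g. 7.2.5.101 -> 0702005101
    if (( (V == 6 && VRMF >= lvm_lvl6) || VRMF >= lvm_lvl7 )); then
        : # LVM at this level syncs VG timestamps clusterwide itself; nothing to do
    fi

Here 0702005101 >= 701003046, so the function returns immediately, as the trace shows.
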
+epprd_rg:clvaryonvg(3.626):datavg[1674] : On successful varyon, clean up any files used to track errors with
+epprd_rg:clvaryonvg(3.626):datavg[1675] : this volume group
+epprd_rg:clvaryonvg(3.626):datavg[1677] rm -f /usr/es/sbin/cluster/etc/vg/datavg.desc /usr/es/sbin/cluster/etc/vg/datavg.replay /usr/es/sbin/cluster/etc/vg/datavg.perms /usr/es/sbin/cluster/etc/vg/datavg.tstamp /usr/es/sbin/cluster/etc/vg/datavg.fail
+epprd_rg:clvaryonvg(3.629):datavg[1680] : Note that a sync has not been done on the volume group at this point.
+epprd_rg:clvaryonvg(3.629):datavg[1681] : A sync is kicked off in cl_sync_vgs, once any filesystem mounts are
+epprd_rg:clvaryonvg(3.629):datavg[1682] : complete. A sync at this time would interfere with the mounts
+epprd_rg:clvaryonvg(3.629):datavg[1685] return 0
+epprd_rg:cl_activate_vgs(3.713):datavg[vgs_chk:103] ERRMSG=$'cl_set_vg_fence_height[126]: version @(#)10\t1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37\ncl_set_vg_fence_height[180]: open(/usr/es/sbin/cluster/etc/vg/datavg.uuid, O_RDONLY)\ncl_set_vg_fence_height[214]: read(datavg, 16)\ncl_set_vg_fence_height[237]: close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)\ncl_set_vg_fence_height[265]: sfwSetFenceGroup(vg=datavg uuid=ec2db4422261eae02091227fb9e53c88 height=rw(0))'
+epprd_rg:cl_activate_vgs(3.713):datavg[vgs_chk:104] RC=0
+epprd_rg:cl_activate_vgs(3.713):datavg[vgs_chk:107] (( 0 == 1 || 0 == 20 ))
+epprd_rg:cl_activate_vgs(3.713):datavg[vgs_chk:115] : exit status of clvaryonvg -n datavg: 0
+epprd_rg:cl_activate_vgs(3.713):datavg[vgs_chk:117] [[ -n $'cl_set_vg_fence_height[126]: version @(#)10\t1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37\ncl_set_vg_fence_height[180]: open(/usr/es/sbin/cluster/etc/vg/datavg.uuid, O_RDONLY)\ncl_set_vg_fence_height[214]: read(datavg, 16)\ncl_set_vg_fence_height[237]: close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)\ncl_set_vg_fence_height[265]: sfwSetFenceGroup(vg=datavg uuid=ec2db4422261eae02091227fb9e53c88 height=rw(0))' ]]
+epprd_rg:cl_activate_vgs(3.713):datavg[vgs_chk:117] (( 0 != 1 ))
+epprd_rg:cl_activate_vgs(3.714):datavg[vgs_chk:119] cl_echo 286 $'cl_activate_vgs: Successful clvaryonvg of datavg with message cl_set_vg_fence_height[126]: version @(#)10\t1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37\ncl_set_vg_fence_height[180]: open(/usr/es/sbin/cluster/etc/vg/datavg.uuid, O_RDONLY)\ncl_set_vg_fence_height[214]: read(datavg, 16)\ncl_set_vg_fence_height[237]: close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)\ncl_set_vg_fence_height[265]: sfwSetFenceGroup(vg=datavg uuid=ec2db4422261eae02091227fb9e53c88 height=rw(0)).' cl_activate_vgs datavg 'cl_set_vg_fence_height[126]:' version '@(#)10' 1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37 'cl_set_vg_fence_height[180]:' 'open(/usr/es/sbin/cluster/etc/vg/datavg.uuid,' 'O_RDONLY)' 'cl_set_vg_fence_height[214]:' 'read(datavg,' '16)' 'cl_set_vg_fence_height[237]:' 'close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)' 'cl_set_vg_fence_height[265]:' 'sfwSetFenceGroup(vg=datavg' uuid=ec2db4422261eae02091227fb9e53c88 height='rw(0))'
Jan 28 2023 18:03:43 cl_activate_vgs: Successful clvaryonvg of datavg with message cl_set_vg_fence_height[126]: version @(#)10 1.5 src/43haes/usr/sbin/cluster/events/utils/cl_set_vg_fence_height.c, hacmp, 61haes_r714 4/12/13 13:18:37
cl_set_vg_fence_height[180]: open(/usr/es/sbin/cluster/etc/vg/datavg.uuid, O_RDONLY)
cl_set_vg_fence_height[214]: read(datavg, 16)
cl_set_vg_fence_height[237]: close(/usr/es/sbin/cluster/etc/vg/datavg.uuid)
cl_set_vg_fence_height[265]: sfwSetFenceGroup(vg=datavg uuid=ec2db4422261eae02091227fb9e53c88 height=rw(0)).
+epprd_rg:cl_activate_vgs(3.733):datavg[vgs_chk:123] [[ 0 != 0 ]]
+epprd_rg:cl_activate_vgs(3.733):datavg[vgs_chk:127] amlog_trace '' 'Activating Volume Group|datavg'
+epprd_rg:cl_activate_vgs(3.733):datavg[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_vgs(3.733):datavg[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_vgs(3.758):datavg[amlog_trace:319] cltime
+epprd_rg:cl_activate_vgs(3.761):datavg[amlog_trace:319] DATE=2023-01-28T18:03:43.599659
+epprd_rg:cl_activate_vgs(3.761):datavg[amlog_trace:320] echo '|2023-01-28T18:03:43.599659|INFO: Activating Volume Group|datavg'
+epprd_rg:cl_activate_vgs(3.761):datavg[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_vgs(3.761):datavg[vgs_chk:132] echo datavg 0
+epprd_rg:cl_activate_vgs(3.761):datavg[vgs_chk:132] 1>> /tmp/_activate_vgs.tmp
+epprd_rg:cl_activate_vgs(3.761):datavg[vgs_chk:133] return 0
+epprd_rg:cl_activate_vgs:datavg[vgs_list:198] unset PS4_LOOP PS4_TIMER
+epprd_rg:cl_activate_vgs[304] wait
+epprd_rg:cl_activate_vgs[310] ALLNOERRVGS=All_nonerror_volume_groups
+epprd_rg:cl_activate_vgs[311] cl_RMupdate resource_up All_nonerror_volume_groups cl_activate_vgs
2023-01-28T18:03:43.622702
2023-01-28T18:03:43.627172
+epprd_rg:cl_activate_vgs[318] [[ -f /tmp/_activate_vgs.tmp ]]
+epprd_rg:cl_activate_vgs[320] grep ' 1' /tmp/_activate_vgs.tmp
+epprd_rg:cl_activate_vgs[329] rm -f /tmp/_activate_vgs.tmp
+epprd_rg:cl_activate_vgs[332] exit 0
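
The bookkeeping at the end of cl_activate_vgs is simple: each vgs_chk appends "<vg> <rc>" to a temp file, and any entry with rc 1 marks a failed varyon. A sketch; the failure branch is an assumption, since this run recorded only "datavg 0":

    # Sketch: aggregate per-VG varyon results.
    echo "datavg 0" >> /tmp/_activate_vgs.tmp
    if grep ' 1' /tmp/_activate_vgs.tmp; then
        : # assumption: at least one varyon failed; report it and exit nonzero
    fi
    rm -f /tmp/_activate_vgs.tmp
    exit 0
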
+epprd_rg:process_resources[process_volume_groups:2584] RC=0
+epprd_rg:process_resources[process_volume_groups:2585] (( 0 != 0 && 0 != 11 ))
+epprd_rg:process_resources[process_volume_groups:2598] (( 0 != 0 ))
+epprd_rg:process_resources[process_volume_groups:2627] return 0
+epprd_rg:process_resources[process_volume_groups_main:2556] STAT=0
+epprd_rg:process_resources[process_volume_groups_main:2559] return 0
+epprd_rg:process_resources[3572] RC=0
+epprd_rg:process_resources[3573] [[ ACQUIRE == RELEASE ]]
+epprd_rg:process_resources[3324] true
+epprd_rg:process_resources[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources[3328] set -a
+epprd_rg:process_resources[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T18:03:43.645862 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources[3329] eval JOB_TYPE=LOGREDO ACTION=ACQUIRE VOLUME_GROUPS='"datavg"' RESOURCE_GROUPS='"epprd_rg' '"'
+epprd_rg:process_resources[1] JOB_TYPE=LOGREDO
+epprd_rg:process_resources[1] ACTION=ACQUIRE
+epprd_rg:process_resources[1] VOLUME_GROUPS=datavg
+epprd_rg:process_resources[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources[3330] RC=0
+epprd_rg:process_resources[3331] set +a
+epprd_rg:process_resources[3333] (( 0 != 0 ))
+epprd_rg:process_resources[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources[3343] export GROUPNAME
+epprd_rg:process_resources[3353] IS_SERVICE_START=1
+epprd_rg:process_resources[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources[3360] [[ LOGREDO == RELEASE ]]
+epprd_rg:process_resources[3360] [[ LOGREDO == ONLINE ]]
+epprd_rg:process_resources[3634] [[ ACQUIRE == ACQUIRE ]]
+epprd_rg:process_resources[3635] logredo_volume_groups
+epprd_rg:process_resources[logredo_volume_groups:2745] PS4_FUNC=logredo_volume_groups
+epprd_rg:process_resources[logredo_volume_groups:2745] typeset PS4_FUNC
+epprd_rg:process_resources(4.649)[logredo_volume_groups:2746] PS4_TIMER=true
+epprd_rg:process_resources(4.649)[logredo_volume_groups:2746] typeset PS4_TIMER
+epprd_rg:process_resources(4.649)[logredo_volume_groups:2747] [[ high == high ]]
+epprd_rg:process_resources(4.649)[logredo_volume_groups:2747] set -x
+epprd_rg:process_resources(4.649)[logredo_volume_groups:2749] TMP_FILE=/var/hacmp/log/.process_resources_logredo.28836342
+epprd_rg:process_resources(4.649)[logredo_volume_groups:2749] export TMP_FILE
+epprd_rg:process_resources(4.649)[logredo_volume_groups:2750] rm -f '/var/hacmp/log/.process_resources_logredo*'
+epprd_rg:process_resources(4.652)[logredo_volume_groups:2752] STAT=0
+epprd_rg:process_resources(4.652)[logredo_volume_groups:2755] export GROUPNAME
+epprd_rg:process_resources(4.653)[logredo_volume_groups:2757] get_list_head datavg
+epprd_rg:process_resources(4.654)[get_list_head:59] PS4_FUNC=get_list_head
+epprd_rg:process_resources(4.654)[get_list_head:59] typeset PS4_FUNC
+epprd_rg:process_resources(4.654)[get_list_head:60] [[ high == high ]]
+epprd_rg:process_resources(4.654)[get_list_head:60] set -x
+epprd_rg:process_resources(4.655)[get_list_head:61] echo datavg
+epprd_rg:process_resources(4.655)[get_list_head:61] read listhead listtail
+epprd_rg:process_resources(4.655)[get_list_head:61] IFS=:
+epprd_rg:process_resources(4.656)[get_list_head:62] tr , ' '
+epprd_rg:process_resources(4.656)[get_list_head:62] echo datavg
+epprd_rg:process_resources(4.654)[logredo_volume_groups:2757] read LIST_OF_VOLUME_GROUPS_FOR_RG
+epprd_rg:process_resources(4.659)[logredo_volume_groups:2758] get_list_tail datavg
+epprd_rg:process_resources(4.660)[get_list_tail:67] PS4_FUNC=get_list_tail
+epprd_rg:process_resources(4.660)[get_list_tail:67] typeset PS4_FUNC
+epprd_rg:process_resources(4.660)[get_list_tail:68] [[ high == high ]]
+epprd_rg:process_resources(4.660)[get_list_tail:68] set -x
+epprd_rg:process_resources(4.661)[get_list_tail:69] echo datavg
+epprd_rg:process_resources(4.661)[get_list_tail:69] read listhead listtail
+epprd_rg:process_resources(4.661)[get_list_tail:69] IFS=:
+epprd_rg:process_resources(4.661)[get_list_tail:70] echo
+epprd_rg:process_resources(4.659)[logredo_volume_groups:2758] read VOLUME_GROUPS
+epprd_rg:process_resources(4.661)[logredo_volume_groups:2761] : Run logredo on all JFS/JFS2 log devices to assure FS consistency
+epprd_rg:process_resources(4.661)[logredo_volume_groups:2763] ALL_LVs=''
+epprd_rg:process_resources(4.661)[logredo_volume_groups:2764] lv_all=''
+epprd_rg:process_resources(4.661)[logredo_volume_groups:2765] mount_fs=''
+epprd_rg:process_resources(4.661)[logredo_volume_groups:2766] fsck_check=''
+epprd_rg:process_resources(4.661)[logredo_volume_groups:2767] MOUNTGUARD=''
+epprd_rg:process_resources(4.661)[logredo_volume_groups:2768] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.661)[logredo_volume_groups:2769] FMMOUNT=''
+epprd_rg:process_resources(4.663)[logredo_volume_groups:2772] tail +3
+epprd_rg:process_resources(4.663)[logredo_volume_groups:2772] lsvg -lL datavg
+epprd_rg:process_resources(4.663)[logredo_volume_groups:2772] LC_ALL=C
+epprd_rg:process_resources(4.664)[logredo_volume_groups:2772] 1>> /var/hacmp/log/.process_resources_logredo.28836342
+epprd_rg:process_resources(4.687)[logredo_volume_groups:2774] awk '{print $1}'
+epprd_rg:process_resources(4.687)[logredo_volume_groups:2774] cat /var/hacmp/log/.process_resources_logredo.28836342
+epprd_rg:process_resources(4.692)[logredo_volume_groups:2774] ALL_LVs=$'epprdaloglv\nsaplv\nsapmntlv\noraclelv\nepplv\noraarchlv\nsapdata1lv\nsapdata2lv\nsapdata3lv\nsapdata4lv\nboardlv\noriglogAlv\noriglogBlv\nmirrlogAlv\nmirrlogBlv'
+epprd_rg:process_resources(4.692)[logredo_volume_groups:2777] : Verify if any of the file system associated with volume group datavg
+epprd_rg:process_resources(4.692)[logredo_volume_groups:2778] : is already mounted anywhere else in the cluster.
+epprd_rg:process_resources(4.692)[logredo_volume_groups:2779] : is already mounted somewhere else, we don't want to continue
+epprd_rg:process_resources(4.692)[logredo_volume_groups:2780] : here to avoid data corruption.
+epprd_rg:process_resources(4.694)[logredo_volume_groups:2782] awk '{print $1}'
+epprd_rg:process_resources(4.694)[logredo_volume_groups:2782] cat /var/hacmp/log/.process_resources_logredo.28836342
+epprd_rg:process_resources(4.694)[logredo_volume_groups:2782] grep -v N/A
+epprd_rg:process_resources(4.699)[logredo_volume_groups:2782] lv_all=$'saplv\nsapmntlv\noraclelv\nepplv\noraarchlv\nsapdata1lv\nsapdata2lv\nsapdata3lv\nsapdata4lv\nboardlv\noriglogAlv\noriglogBlv\nmirrlogAlv\nmirrlogBlv'
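
Both LV lists above come from one lsvg listing. Condensed (the temp file name is hypothetical):

    # Sketch: enumerate the VG's LVs once, then derive both lists.
    # lsvg -lL prints 'LV NAME TYPE LPs PPs PVs LV STATE MOUNT POINT';
    # the first two lines are headers, hence tail +3.
    TMP=/tmp/lsvg_out.$$
    LC_ALL=C lsvg -lL datavg | tail +3 > $TMP
    ALL_LVs=$(awk '{print $1}' $TMP)               # every LV, log devices included
    lv_all=$(grep -v N/A $TMP | awk '{print $1}')  # only LVs with a mount point

epprdaloglv drops out of lv_all because a log device has no mount point (N/A).
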
+epprd_rg:process_resources(4.699)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.699)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.701)[logredo_volume_groups:2789] lsfs -qc saplv
+epprd_rg:process_resources(4.702)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(4.702)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.702)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/saplv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.704)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.708)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.708)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.708)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.708)[logredo_volume_groups:2795] fsdb saplv
+epprd_rg:process_resources(4.709)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.712)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.714)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.715)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.715)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.720)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.720)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.720)[logredo_volume_groups:2804] [[ -n '' ]]
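
This probe now repeats for every logical volume in the group. One pass reduces to the sketch below; lsfs -qc and the fsdb subcommands are as in the trace, with the here-document replaced by an equivalent print on stdin. Treating FMMOUNT == yes as "mounted elsewhere" follows the trace's comparison; which variable the final [[ -n ... ]] guard tests is not visible here.

    # Sketch: refuse to touch a filesystem that looks mounted on another node.
    lv=saplv
    # MountGuard setting from lsfs, if /etc/filesystems has a stanza for the FS:
    MOUNTGUARD=$(LC_ALL=C lsfs -qc $lv 2>/dev/null | tr : '\n' | grep -w MountGuard | cut -d' ' -f2)
    # superblock flags via fsdb: 'su' dumps the superblock, 'q' quits
    FMMOUNT_OUT=$(print "su\nq" | fsdb $lv)
    FMMOUNT=$(echo "$FMMOUNT_OUT" | grep -w FM_MOUNT | awk '{ print $1 }')
    if [[ $FMMOUNT == yes ]]; then
        : # flagged as cleanly mounted somewhere: stop rather than risk corruption
    fi

With no stanza in /etc/filesystems both probes come back empty, which is why every LV here falls through to the next iteration.
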
+epprd_rg:process_resources(4.720)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.720)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.722)[logredo_volume_groups:2789] lsfs -qc sapmntlv
+epprd_rg:process_resources(4.722)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(4.723)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.723)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/sapmntlv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.725)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.729)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.729)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.729)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.729)[logredo_volume_groups:2795] fsdb sapmntlv
+epprd_rg:process_resources(4.730)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.733)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.735)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.735)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.735)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.740)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.740)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.740)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(4.740)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.740)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.742)[logredo_volume_groups:2789] lsfs -qc oraclelv
+epprd_rg:process_resources(4.743)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(4.743)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.743)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/oraclelv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.745)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.749)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.749)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.749)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.749)[logredo_volume_groups:2795] fsdb oraclelv
+epprd_rg:process_resources(4.750)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.753)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.755)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.756)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.756)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.761)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.761)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.761)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(4.761)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.761)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.763)[logredo_volume_groups:2789] lsfs -qc epplv
+epprd_rg:process_resources(4.763)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(4.763)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.764)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/epplv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.765)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.769)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.769)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.769)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.769)[logredo_volume_groups:2795] fsdb epplv
+epprd_rg:process_resources(4.770)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.774)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.776)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.776)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.776)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.781)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.781)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.781)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(4.781)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.781)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.783)[logredo_volume_groups:2789] lsfs -qc oraarchlv
+epprd_rg:process_resources(4.783)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(4.784)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.784)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/oraarchlv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.786)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.790)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.790)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.790)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.790)[logredo_volume_groups:2795] fsdb oraarchlv
+epprd_rg:process_resources(4.791)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.794)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.796)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.796)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.797)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.801)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.802)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.802)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(4.802)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.802)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.804)[logredo_volume_groups:2789] lsfs -qc sapdata1lv
+epprd_rg:process_resources(4.804)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(4.804)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.805)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/sapdata1lv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.806)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.810)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.810)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.810)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.810)[logredo_volume_groups:2795] fsdb sapdata1lv
+epprd_rg:process_resources(4.811)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.815)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.817)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.817)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.817)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.822)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.822)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.822)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(4.822)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.822)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.824)[logredo_volume_groups:2789] lsfs -qc sapdata2lv
+epprd_rg:process_resources(4.825)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(4.825)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.825)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/sapdata2lv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.827)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.831)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.831)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.831)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.831)[logredo_volume_groups:2795] fsdb sapdata2lv
+epprd_rg:process_resources(4.832)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.835)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.837)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.837)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.838)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.842)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.843)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.843)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(4.843)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.843)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.845)[logredo_volume_groups:2789] lsfs -qc sapdata3lv
+epprd_rg:process_resources(4.845)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(4.845)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.846)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/sapdata3lv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.847)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.851)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.851)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.851)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.851)[logredo_volume_groups:2795] fsdb sapdata3lv
+epprd_rg:process_resources(4.852)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.855)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.857)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.858)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.858)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.863)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.863)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.863)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(4.863)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.863)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.865)[logredo_volume_groups:2789] lsfs -qc sapdata4lv
+epprd_rg:process_resources(4.865)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(4.865)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.866)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/sapdata4lv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.867)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.871)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.871)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.871)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.871)[logredo_volume_groups:2795] fsdb sapdata4lv
+epprd_rg:process_resources(4.872)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.875)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.877)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.878)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.878)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.883)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.883)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.883)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(4.883)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.883)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.885)[logredo_volume_groups:2789] lsfs -qc boardlv
+epprd_rg:process_resources(4.885)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(4.886)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.886)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/boardlv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.887)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.891)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.891)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.891)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.892)[logredo_volume_groups:2795] fsdb boardlv
+epprd_rg:process_resources(4.893)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.896)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.898)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.898)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.898)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.903)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.903)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.903)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(4.903)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.903)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.905)[logredo_volume_groups:2789] lsfs -qc origlogAlv
+epprd_rg:process_resources(4.905)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(4.906)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.906)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/origlogAlv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.908)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.911)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.911)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.911)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.912)[logredo_volume_groups:2795] fsdb origlogAlv
+epprd_rg:process_resources(4.913)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.916)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.918)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.918)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.919)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.923)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.923)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.924)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(4.924)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.924)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.926)[logredo_volume_groups:2789] lsfs -qc origlogBlv
+epprd_rg:process_resources(4.926)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(4.926)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.927)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/origlogBlv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.928)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.932)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.932)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.932)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.932)[logredo_volume_groups:2795] fsdb origlogBlv
+epprd_rg:process_resources(4.933)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.936)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.938)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.939)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.939)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.944)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.944)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.944)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(4.944)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.944)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.946)[logredo_volume_groups:2789] lsfs -qc mirrlogAlv
+epprd_rg:process_resources(4.946)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(4.946)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.947)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/mirrlogAlv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.948)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.952)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.952)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.952)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.952)[logredo_volume_groups:2795] fsdb mirrlogAlv
+epprd_rg:process_resources(4.953)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.956)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.958)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.959)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.959)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.964)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.964)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.964)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(4.964)[logredo_volume_groups:2786] : When a filesystem is protected against concurrent mounting,
+epprd_rg:process_resources(4.964)[logredo_volume_groups:2787] : MountGuard flag is set and lsfs command displays characteristics of file systems.
+epprd_rg:process_resources(4.966)[logredo_volume_groups:2789] lsfs -qc mirrlogBlv
+epprd_rg:process_resources(4.966)[logredo_volume_groups:2789] LC_ALL=C
+epprd_rg:process_resources(4.966)[logredo_volume_groups:2789] tr : '\n'
+epprd_rg:process_resources(4.967)[logredo_volume_groups:2789] grep -w MountGuard
lsfs: No record matching '/var/hacmp/mirrlogBlv' was found in /etc/filesystems.
+epprd_rg:process_resources(4.968)[logredo_volume_groups:2789] cut '-d ' -f2
+epprd_rg:process_resources(4.972)[logredo_volume_groups:2789] MOUNTGUARD=''
+epprd_rg:process_resources(4.972)[logredo_volume_groups:2792] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:process_resources(4.972)[logredo_volume_groups:2793] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:process_resources(4.972)[logredo_volume_groups:2795] fsdb mirrlogBlv
+epprd_rg:process_resources(4.973)[logredo_volume_groups:2795] 0<< \EOF
su
q
EOF
+epprd_rg:process_resources(4.977)[logredo_volume_groups:2795] FMMOUNT_OUT=''
+epprd_rg:process_resources(4.979)[logredo_volume_groups:2799] echo ''
+epprd_rg:process_resources(4.979)[logredo_volume_groups:2799] awk '{ print $1 }'
+epprd_rg:process_resources(4.979)[logredo_volume_groups:2799] grep -w FM_MOUNT
+epprd_rg:process_resources(4.984)[logredo_volume_groups:2799] FMMOUNT=''
+epprd_rg:process_resources(4.984)[logredo_volume_groups:2800] [[ '' == yes ]]
+epprd_rg:process_resources(4.984)[logredo_volume_groups:2804] [[ -n '' ]]
+epprd_rg:process_resources(4.984)[logredo_volume_groups:2814] comm_failure=''
+epprd_rg:process_resources(4.984)[logredo_volume_groups:2815] rc_mount=''
+epprd_rg:process_resources(4.984)[logredo_volume_groups:2816] [[ -n '' ]]
+epprd_rg:process_resources(4.984)[logredo_volume_groups:2851] logdevs=''
+epprd_rg:process_resources(4.984)[logredo_volume_groups:2852] HAVE_GEO=''
+epprd_rg:process_resources(4.984)[logredo_volume_groups:2853] lslpp -l 'hageo.*'
+epprd_rg:process_resources(4.985)[logredo_volume_groups:2853] 1> /dev/null 2>& 1
+epprd_rg:process_resources(4.988)[logredo_volume_groups:2854] lslpp -l 'geoRM.*'
+epprd_rg:process_resources(4.989)[logredo_volume_groups:2854] 1> /dev/null 2>& 1
+epprd_rg:process_resources(4.992)[logredo_volume_groups:2874] pattern='jfs*log'
+epprd_rg:process_resources(4.992)[logredo_volume_groups:2876] : Any device with the type as log should be added
+epprd_rg:process_resources(4.992)[logredo_volume_groups:2882] odmget -q $'name = epprdaloglv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(4.996)[logredo_volume_groups:2882] [[ -n $'\nCuAt:\n\tname = "epprdaloglv"\n\tattribute = "type"\n\tvalue = "jfs2log"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(4.996)[logredo_volume_groups:2884] logdevs=' /dev/epprdaloglv'
+epprd_rg:process_resources(4.996)[logredo_volume_groups:2882] odmget -q $'name = saplv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(4.999)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(4.999)[logredo_volume_groups:2882] odmget -q $'name = sapmntlv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.003)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.003)[logredo_volume_groups:2882] odmget -q $'name = oraclelv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.006)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.006)[logredo_volume_groups:2882] odmget -q $'name = epplv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.010)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.010)[logredo_volume_groups:2882] odmget -q $'name = oraarchlv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.013)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.013)[logredo_volume_groups:2882] odmget -q $'name = sapdata1lv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.017)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.017)[logredo_volume_groups:2882] odmget -q $'name = sapdata2lv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.020)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.020)[logredo_volume_groups:2882] odmget -q $'name = sapdata3lv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.024)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.024)[logredo_volume_groups:2882] odmget -q $'name = sapdata4lv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.027)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.027)[logredo_volume_groups:2882] odmget -q $'name = boardlv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.031)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.031)[logredo_volume_groups:2882] odmget -q $'name = origlogAlv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.034)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.034)[logredo_volume_groups:2882] odmget -q $'name = origlogBlv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.038)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.038)[logredo_volume_groups:2882] odmget -q $'name = mirrlogAlv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.041)[logredo_volume_groups:2882] [[ -n '' ]]
+epprd_rg:process_resources(5.042)[logredo_volume_groups:2882] odmget -q $'name = mirrlogBlv and \t\t attribute = type and \t\t value like jfs*log' CuAt
+epprd_rg:process_resources(5.045)[logredo_volume_groups:2882] [[ -n '' ]]
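
The loop above collects dedicated log devices: any LV whose CuAt type matches jfs*log (jfslog or jfs2log) joins logdevs. Condensed:

    # Sketch: gather standalone JFS/JFS2 log LVs for logredo.
    pattern='jfs*log'
    logdevs=''
    for lv in $ALL_LVs; do
        if [[ -n $(odmget -q "name = $lv and attribute = type and value like $pattern" CuAt) ]]; then
            logdevs="$logdevs /dev/$lv"    # here only epprdaloglv qualifies
        fi
    done
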
+epprd_rg:process_resources(5.045)[logredo_volume_groups:2889] : JFS2 file systems can have inline logs where the log LV is the same as the FS LV.
+epprd_rg:process_resources(5.045)[logredo_volume_groups:2895] odmget $'-qname = epprdaloglv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.049)[logredo_volume_groups:2895] [[ -n '' ]]
+epprd_rg:process_resources(5.049)[logredo_volume_groups:2895] odmget $'-qname = saplv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.052)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "saplv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.054)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.054)[logredo_volume_groups:2898] odmget -q 'name = saplv and attribute = label' CuAt
+epprd_rg:process_resources(5.058)[logredo_volume_groups:2898] [[ -n /usr/sap ]]
+epprd_rg:process_resources(5.060)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.060)[logredo_volume_groups:2900] grep -wp /dev/saplv /etc/filesystems
+epprd_rg:process_resources(5.065)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.065)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.065)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/saplv ]]
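
The second scan, whose first pass ends above, handles JFS2 inline logs: for each jfs2 LV with a label, the log attribute of its /etc/filesystems stanza is read; INLINE, or a log device equal to the FS device itself, marks an inline log. One pass as a sketch; what the inline branch appends is not visible in this trace, so that line is an assumption.

    # Sketch: detect an inline log for one jfs2 LV.
    lv=saplv
    label=$(odmget -q "name = $lv and attribute = label" CuAt | sed -n '/value =/s/^.*"\(.*\)".*/\1/p')
    if [[ -n $label ]]; then
        # AIX grep -p prints the whole /etc/filesystems stanza for the device
        LOG=$(grep -wp /dev/$lv /etc/filesystems | awk '$1 ~ /log/ {printf $3}')
        if [[ $LOG == INLINE || $LOG == /dev/$lv ]]; then
            logdevs="$logdevs /dev/$lv"   # assumption: inline log -> logredo the FS LV itself
        fi
    fi

For every LV in this run the stanza names /dev/epprdaloglv, so nothing further is added.
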
+epprd_rg:process_resources(5.065)[logredo_volume_groups:2895] odmget $'-qname = sapmntlv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.069)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "sapmntlv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.071)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.071)[logredo_volume_groups:2898] odmget -q 'name = sapmntlv and attribute = label' CuAt
+epprd_rg:process_resources(5.075)[logredo_volume_groups:2898] [[ -n /sapmnt ]]
+epprd_rg:process_resources(5.077)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.077)[logredo_volume_groups:2900] grep -wp /dev/sapmntlv /etc/filesystems
+epprd_rg:process_resources(5.082)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.082)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.082)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/sapmntlv ]]
+epprd_rg:process_resources(5.082)[logredo_volume_groups:2895] odmget $'-qname = oraclelv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.086)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "oraclelv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.088)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.088)[logredo_volume_groups:2898] odmget -q 'name = oraclelv and attribute = label' CuAt
+epprd_rg:process_resources(5.092)[logredo_volume_groups:2898] [[ -n /oracle ]]
+epprd_rg:process_resources(5.094)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.094)[logredo_volume_groups:2900] grep -wp /dev/oraclelv /etc/filesystems
+epprd_rg:process_resources(5.099)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.099)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.099)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/oraclelv ]]
+epprd_rg:process_resources(5.099)[logredo_volume_groups:2895] odmget $'-qname = epplv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.102)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "epplv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.104)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.104)[logredo_volume_groups:2898] odmget -q 'name = epplv and attribute = label' CuAt
+epprd_rg:process_resources(5.109)[logredo_volume_groups:2898] [[ -n /oracle/EPP ]]
+epprd_rg:process_resources(5.111)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.111)[logredo_volume_groups:2900] grep -wp /dev/epplv /etc/filesystems
+epprd_rg:process_resources(5.116)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.116)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.116)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/epplv ]]
+epprd_rg:process_resources(5.116)[logredo_volume_groups:2895] odmget $'-qname = oraarchlv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.119)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "oraarchlv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.121)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.121)[logredo_volume_groups:2898] odmget -q 'name = oraarchlv and attribute = label' CuAt
+epprd_rg:process_resources(5.126)[logredo_volume_groups:2898] [[ -n /oracle/EPP/oraarch ]]
+epprd_rg:process_resources(5.128)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.128)[logredo_volume_groups:2900] grep -wp /dev/oraarchlv /etc/filesystems
+epprd_rg:process_resources(5.133)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.133)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.133)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/oraarchlv ]]
+epprd_rg:process_resources(5.133)[logredo_volume_groups:2895] odmget $'-qname = sapdata1lv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.136)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "sapdata1lv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.138)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.138)[logredo_volume_groups:2898] odmget -q 'name = sapdata1lv and attribute = label' CuAt
+epprd_rg:process_resources(5.143)[logredo_volume_groups:2898] [[ -n /oracle/EPP/sapdata1 ]]
+epprd_rg:process_resources(5.145)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.145)[logredo_volume_groups:2900] grep -wp /dev/sapdata1lv /etc/filesystems
+epprd_rg:process_resources(5.150)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.150)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.150)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/sapdata1lv ]]
+epprd_rg:process_resources(5.150)[logredo_volume_groups:2895] odmget $'-qname = sapdata2lv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.153)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "sapdata2lv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.155)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.155)[logredo_volume_groups:2898] odmget -q 'name = sapdata2lv and attribute = label' CuAt
+epprd_rg:process_resources(5.159)[logredo_volume_groups:2898] [[ -n /oracle/EPP/sapdata2 ]]
+epprd_rg:process_resources(5.162)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.162)[logredo_volume_groups:2900] grep -wp /dev/sapdata2lv /etc/filesystems
+epprd_rg:process_resources(5.167)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.167)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.167)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/sapdata2lv ]]
+epprd_rg:process_resources(5.167)[logredo_volume_groups:2895] odmget $'-qname = sapdata3lv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.170)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "sapdata3lv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.172)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.172)[logredo_volume_groups:2898] odmget -q 'name = sapdata3lv and attribute = label' CuAt
+epprd_rg:process_resources(5.176)[logredo_volume_groups:2898] [[ -n /oracle/EPP/sapdata3 ]]
+epprd_rg:process_resources(5.178)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.178)[logredo_volume_groups:2900] grep -wp /dev/sapdata3lv /etc/filesystems
+epprd_rg:process_resources(5.183)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.184)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.184)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/sapdata3lv ]]
+epprd_rg:process_resources(5.184)[logredo_volume_groups:2895] odmget $'-qname = sapdata4lv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.187)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "sapdata4lv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.189)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.189)[logredo_volume_groups:2898] odmget -q 'name = sapdata4lv and attribute = label' CuAt
+epprd_rg:process_resources(5.193)[logredo_volume_groups:2898] [[ -n /oracle/EPP/sapdata4 ]]
+epprd_rg:process_resources(5.195)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.195)[logredo_volume_groups:2900] grep -wp /dev/sapdata4lv /etc/filesystems
+epprd_rg:process_resources(5.200)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.200)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.200)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/sapdata4lv ]]
+epprd_rg:process_resources(5.201)[logredo_volume_groups:2895] odmget $'-qname = boardlv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.204)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "boardlv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.206)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.206)[logredo_volume_groups:2898] odmget -q 'name = boardlv and attribute = label' CuAt
+epprd_rg:process_resources(5.210)[logredo_volume_groups:2898] [[ -n /board_org ]]
+epprd_rg:process_resources(5.212)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.212)[logredo_volume_groups:2900] grep -wp /dev/boardlv /etc/filesystems
+epprd_rg:process_resources(5.217)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.217)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.217)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/boardlv ]]
+epprd_rg:process_resources(5.218)[logredo_volume_groups:2895] odmget $'-qname = origlogAlv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.221)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "origlogAlv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.223)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.223)[logredo_volume_groups:2898] odmget -q 'name = origlogAlv and attribute = label' CuAt
+epprd_rg:process_resources(5.227)[logredo_volume_groups:2898] [[ -n /oracle/EPP/origlogA ]]
+epprd_rg:process_resources(5.229)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.229)[logredo_volume_groups:2900] grep -wp /dev/origlogAlv /etc/filesystems
+epprd_rg:process_resources(5.234)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.234)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.234)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/origlogAlv ]]
+epprd_rg:process_resources(5.235)[logredo_volume_groups:2895] odmget $'-qname = origlogBlv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.238)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "origlogBlv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.240)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.240)[logredo_volume_groups:2898] odmget -q 'name = origlogBlv and attribute = label' CuAt
+epprd_rg:process_resources(5.244)[logredo_volume_groups:2898] [[ -n /oracle/EPP/origlogB ]]
+epprd_rg:process_resources(5.246)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.246)[logredo_volume_groups:2900] grep -wp /dev/origlogBlv /etc/filesystems
+epprd_rg:process_resources(5.251)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.251)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.251)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/origlogBlv ]]
+epprd_rg:process_resources(5.251)[logredo_volume_groups:2895] odmget $'-qname = mirrlogAlv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.255)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "mirrlogAlv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.257)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.257)[logredo_volume_groups:2898] odmget -q 'name = mirrlogAlv and attribute = label' CuAt
+epprd_rg:process_resources(5.261)[logredo_volume_groups:2898] [[ -n /oracle/EPP/mirrlogA ]]
+epprd_rg:process_resources(5.264)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.264)[logredo_volume_groups:2900] grep -wp /dev/mirrlogAlv /etc/filesystems
+epprd_rg:process_resources(5.269)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.269)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.269)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/mirrlogAlv ]]
+epprd_rg:process_resources(5.269)[logredo_volume_groups:2895] odmget $'-qname = mirrlogBlv and \t\t attribute = type and \t\t value = jfs2' CuAt
+epprd_rg:process_resources(5.272)[logredo_volume_groups:2895] [[ -n $'\nCuAt:\n\tname = "mirrlogBlv"\n\tattribute = "type"\n\tvalue = "jfs2"\n\ttype = "R"\n\tgeneric = "DU"\n\trep = "s"\n\tnls_index = 639' ]]
+epprd_rg:process_resources(5.274)[logredo_volume_groups:2898] sed -n '/value =/s/^.*"\(.*\)".*/\1/p'
+epprd_rg:process_resources(5.274)[logredo_volume_groups:2898] odmget -q 'name = mirrlogBlv and attribute = label' CuAt
+epprd_rg:process_resources(5.278)[logredo_volume_groups:2898] [[ -n /oracle/EPP/mirrlogB ]]
+epprd_rg:process_resources(5.280)[logredo_volume_groups:2900] awk '$1 ~ /log/ {printf $3}'
+epprd_rg:process_resources(5.281)[logredo_volume_groups:2900] grep -wp /dev/mirrlogBlv /etc/filesystems
+epprd_rg:process_resources(5.285)[logredo_volume_groups:2900] LOG=/dev/epprdaloglv
+epprd_rg:process_resources(5.286)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == INLINE ]]
+epprd_rg:process_resources(5.286)[logredo_volume_groups:2901] [[ /dev/epprdaloglv == /dev/mirrlogBlv ]]
+epprd_rg:process_resources(5.286)[logredo_volume_groups:2910] : Remove any duplicates acquired so far
+epprd_rg:process_resources(5.288)[logredo_volume_groups:2912] echo /dev/epprdaloglv
+epprd_rg:process_resources(5.288)[logredo_volume_groups:2912] tr ' ' '\n'
+epprd_rg:process_resources(5.289)[logredo_volume_groups:2912] sort -u
+epprd_rg:process_resources(5.295)[logredo_volume_groups:2912] logdevs=/dev/epprdaloglv
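The three commands above form one pipeline: the accumulated log-device list is split onto separate lines and collapsed with sort -u, so each jfslog is recovered only once no matter how many filesystems share it. Equivalent ksh:

    # Drop duplicate log devices before running logredo
    logdevs=$(echo $logdevs | tr ' ' '\n' | sort -u)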
+epprd_rg:process_resources(5.295)[logredo_volume_groups:2915] : Run logredos in parallel to save time.
+epprd_rg:process_resources(5.295)[logredo_volume_groups:2919] [[ -n '' ]]
+epprd_rg:process_resources(5.295)[logredo_volume_groups:2944] : Run logredo only if the LV is closed.
+epprd_rg:process_resources(5.295)[logredo_volume_groups:2946] awk '$1 ~ /^epprdaloglv$/ && $6 ~ /closed\// {print "CLOSED"}' /var/hacmp/log/.process_resources_logredo.28836342
+epprd_rg:process_resources(5.300)[logredo_volume_groups:2946] [[ -n CLOSED ]]
+epprd_rg:process_resources(5.300)[logredo_volume_groups:2949] : Run logredo only if the filesystem is not mounted on any of the nodes in the cluster.
+epprd_rg:process_resources(5.300)[logredo_volume_groups:2951] [[ -z '' ]]
+epprd_rg:process_resources(5.301)[logredo_volume_groups:2958] rm -f /var/hacmp/log/.process_resources_logredo.28836342
+epprd_rg:process_resources(5.301)[logredo_volume_groups:2953] logredo /dev/epprdaloglv
+epprd_rg:process_resources(5.305)[logredo_volume_groups:2962] : Wait for the background logredos from the RGs
+epprd_rg:process_resources(5.305)[logredo_volume_groups:2964] wait
J2_LOGREDO:log redo processing for /dev/epprdaloglv
+epprd_rg:process_resources(5.312)[logredo_volume_groups:2966] return 0
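Before the logredo, the log LV's state is read from a temporary snapshot (here /var/hacmp/log/.process_resources_logredo.28836342) and recovery runs only when the LV is closed; the background logredos are then reaped with wait (the rm of the snapshot interleaves with the backgrounded logredo in the trace above). A sketch of that guard, assuming the snapshot holds lsvg -l style output whose sixth column is the LV state:

    # Run logredo only when the log LV shows a closed/ state, i.e. the
    # filesystem is not open anywhere; recover in the background and wait.
    lv=epprdaloglv
    state_file=/var/hacmp/log/.process_resources_logredo.$$
    if [[ -n $(awk -v lv="$lv" '$1 == lv && $6 ~ /closed\//' "$state_file") ]]; then
        logredo /dev/$lv &
    fi
    rm -f "$state_file"
    wait    # reap background logredos before returning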
+epprd_rg:process_resources(5.312)[3324] true
+epprd_rg:process_resources(5.312)[3326] : call rgpa, and it will tell us what to do next
+epprd_rg:process_resources(5.312)[3328] set -a
+epprd_rg:process_resources(5.312)[3329] clRGPA
+epprd_rg:clRGPA[+47] [[ high = high ]]
+epprd_rg:clRGPA[+47] version=1.3 $Source: 61haes_r711 43haes/usr/sbin/cluster/clresmgrd/utils/clRGPA.sh 1$
+epprd_rg:clRGPA[+49] usingVer=clrgpa
+epprd_rg:clRGPA[+54] clrgpa
2023-01-28T18:03:44.328564 clrgpa
+epprd_rg:clRGPA[+55] exit 0
+epprd_rg:process_resources(5.331)[3329] eval JOB_TYPE=FILESYSTEMS ACTION=ACQUIRE FILE_SYSTEMS='"/board_org,/oracle,/oracle/EPP,/oracle/EPP/mirrlogA,/oracle/EPP/mirrlogB,/oracle/EPP/oraarch,/oracle/EPP/origlogA,/oracle/EPP/origlogB,/oracle/EPP/sapdata1,/oracle/EPP/sapdata2,/oracle/EPP/sapdata3,/oracle/EPP/sapdata4,/sapmnt,/usr/sap"' RESOURCE_GROUPS='"epprd_rg' '"' FSCHECK_TOOLS='"fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck"' RECOVERY_METHODS='"sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential"'
+epprd_rg:process_resources(5.331)[1] JOB_TYPE=FILESYSTEMS
+epprd_rg:process_resources(5.331)[1] ACTION=ACQUIRE
+epprd_rg:process_resources(5.331)[1] FILE_SYSTEMS=/board_org,/oracle,/oracle/EPP,/oracle/EPP/mirrlogA,/oracle/EPP/mirrlogB,/oracle/EPP/oraarch,/oracle/EPP/origlogA,/oracle/EPP/origlogB,/oracle/EPP/sapdata1,/oracle/EPP/sapdata2,/oracle/EPP/sapdata3,/oracle/EPP/sapdata4,/sapmnt,/usr/sap
+epprd_rg:process_resources(5.331)[1] RESOURCE_GROUPS='epprd_rg '
+epprd_rg:process_resources(5.331)[1] FSCHECK_TOOLS=fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck
+epprd_rg:process_resources(5.331)[1] RECOVERY_METHODS=sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential
+epprd_rg:process_resources(5.331)[3330] RC=0
+epprd_rg:process_resources(5.331)[3331] set +a
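clRGPA prints shell assignments describing the next job, and process_resources evals them between set -a and set +a so that every variable (JOB_TYPE, ACTION, FILE_SYSTEMS, ...) is automatically exported to the resource-processing scripts it calls next. The dispatch pattern, sketched under those assumptions:

    # clRGPA emits NAME=VALUE pairs; allexport turns each assignment
    # into an environment variable for the child scripts.
    set -a
    eval $(clRGPA)    # e.g. JOB_TYPE=FILESYSTEMS ACTION=ACQUIRE ...
    RC=$?
    set +a
    (( RC != 0 )) && exit $RC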
+epprd_rg:process_resources(5.331)[3333] (( 0 != 0 ))
+epprd_rg:process_resources(5.331)[3342] RESOURCE_GROUPS=epprd_rg
+epprd_rg:process_resources(5.331)[3343] GROUPNAME=epprd_rg
+epprd_rg:process_resources(5.331)[3343] export GROUPNAME
+epprd_rg:process_resources(5.331)[3353] IS_SERVICE_START=1
+epprd_rg:process_resources(5.331)[3354] IS_SERVICE_STOP=1
+epprd_rg:process_resources(5.331)[3360] [[ FILESYSTEMS == RELEASE ]]
+epprd_rg:process_resources(5.331)[3360] [[ FILESYSTEMS == ONLINE ]]
+epprd_rg:process_resources(5.331)[3482] process_file_systems ACQUIRE
+epprd_rg:process_resources(5.331)[process_file_systems:2640] PS4_FUNC=process_file_systems
+epprd_rg:process_resources(5.331)[process_file_systems:2640] typeset PS4_FUNC
+epprd_rg:process_resources(5.331)[process_file_systems:2641] [[ high == high ]]
+epprd_rg:process_resources(5.331)[process_file_systems:2641] set -x
+epprd_rg:process_resources(5.331)[process_file_systems:2643] STAT=0
+epprd_rg:process_resources(5.331)[process_file_systems:2645] [[ ACQUIRE == ACQUIRE ]]
+epprd_rg:process_resources(5.331)[process_file_systems:2647] cl_activate_fs
+epprd_rg:cl_activate_fs[819] version=1.1.8.5
+epprd_rg:cl_activate_fs[823] : Check for mounting OEM file systems
+epprd_rg:cl_activate_fs[825] OEM_FS=false
+epprd_rg:cl_activate_fs[826] (( 0 != 0 ))
+epprd_rg:cl_activate_fs[832] STATUS=0
+epprd_rg:cl_activate_fs[832] typeset -li STATUS
+epprd_rg:cl_activate_fs[833] EMULATE=REAL
+epprd_rg:cl_activate_fs[836] : The environment variable MOUNT_WLMCNTRL_SELFMANAGE is referenced inside mount.
+epprd_rg:cl_activate_fs[837] : If this variable is set, a few calls to wlmcntrl are skipped inside mount, which
+epprd_rg:cl_activate_fs[838] : offers performance benefits. Hence we will export this variable if it is set
+epprd_rg:cl_activate_fs[839] : in /etc/environment.
+epprd_rg:cl_activate_fs[841] grep -w ^MOUNT_WLMCNTRL_SELFMANAGE /etc/environment
+epprd_rg:cl_activate_fs[841] export eval
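The grep/export pair above exports MOUNT_WLMCNTRL_SELFMANAGE into this shell when /etc/environment defines it, so mount can skip its wlmcntrl calls. A sketch of the intent (the script's exact quoting may differ; here it found no match, so nothing was exported):

    # Export the tunable only if /etc/environment defines it
    line=$(grep -w ^MOUNT_WLMCNTRL_SELFMANAGE /etc/environment)
    [[ -n $line ]] && eval export "$line"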
+epprd_rg:cl_activate_fs[843] [[ -n FILESYSTEMS ]]
+epprd_rg:cl_activate_fs[843] [[ FILESYSTEMS != GROUP ]]
+epprd_rg:cl_activate_fs[846] : If JOB_TYPE is set and does not equal GROUP, then
+epprd_rg:cl_activate_fs[847] : we are processing for process_resources, which passes requests
+epprd_rg:cl_activate_fs[848] : associated with multiple resource groups through environment variables
+epprd_rg:cl_activate_fs[850] activate_fs_process_resources
+epprd_rg:cl_activate_fs[activate_fs_process_resources:716] [[ high == high ]]
+epprd_rg:cl_activate_fs[activate_fs_process_resources:716] set -x
+epprd_rg:cl_activate_fs[activate_fs_process_resources:718] ERRSTATUS=0
+epprd_rg:cl_activate_fs[activate_fs_process_resources:718] typeset -i ERRSTATUS
+epprd_rg:cl_activate_fs[activate_fs_process_resources:719] RC=0
+epprd_rg:cl_activate_fs[activate_fs_process_resources:719] typeset -li RC
+epprd_rg:cl_activate_fs[activate_fs_process_resources:742] export GROUPNAME
+epprd_rg:cl_activate_fs[activate_fs_process_resources:745] : Get the file systems, recovery tool and procedure for this
+epprd_rg:cl_activate_fs[activate_fs_process_resources:746] : resource group
+epprd_rg:cl_activate_fs[activate_fs_process_resources:748] print /board_org,/oracle,/oracle/EPP,/oracle/EPP/mirrlogA,/oracle/EPP/mirrlogB,/oracle/EPP/oraarch,/oracle/EPP/origlogA,/oracle/EPP/origlogB,/oracle/EPP/sapdata1,/oracle/EPP/sapdata2,/oracle/EPP/sapdata3,/oracle/EPP/sapdata4,/sapmnt,/usr/sap
+epprd_rg:cl_activate_fs[activate_fs_process_resources:748] read _RG_FILE_SYSTEMS FILE_SYSTEMS
+epprd_rg:cl_activate_fs[activate_fs_process_resources:748] IFS=:
+epprd_rg:cl_activate_fs[activate_fs_process_resources:749] print fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck
+epprd_rg:cl_activate_fs[activate_fs_process_resources:749] read _RG_FSCHECK_TOOLS FSCHECK_TOOLS
+epprd_rg:cl_activate_fs[activate_fs_process_resources:749] IFS=:
+epprd_rg:cl_activate_fs[activate_fs_process_resources:750] print sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential
+epprd_rg:cl_activate_fs[activate_fs_process_resources:750] read _RG_RECOVERY_METHODS RECOVERY_METHODS
+epprd_rg:cl_activate_fs[activate_fs_process_resources:750] IFS=:
+epprd_rg:cl_activate_fs[activate_fs_process_resources:753] : Since all file systems in a resource group use the same recovery
+epprd_rg:cl_activate_fs[activate_fs_process_resources:754] : method and recovery tool, just pick the first one in the list
+epprd_rg:cl_activate_fs[activate_fs_process_resources:756] print fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck,fsck
+epprd_rg:cl_activate_fs[activate_fs_process_resources:756] read FSCHECK_TOOL rest
+epprd_rg:cl_activate_fs[activate_fs_process_resources:756] IFS=,
+epprd_rg:cl_activate_fs[activate_fs_process_resources:757] print sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential,sequential
+epprd_rg:cl_activate_fs[activate_fs_process_resources:757] read RECOVERY_METHOD rest
+epprd_rg:cl_activate_fs[activate_fs_process_resources:757] IFS=,
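Each of these variables packs one field per resource group, colon-separated across groups and comma-separated within a group; the print | read pairs peel off the first element while the remainder stays in rest. This works because ksh runs the last component of a pipeline in the current shell, so the variables read survives:

    # Take the first comma-separated entry; the IFS assignment
    # applies to the read builtin only.
    print "$FSCHECK_TOOLS"    | IFS=, read FSCHECK_TOOL rest
    print "$RECOVERY_METHODS" | IFS=, read RECOVERY_METHOD rest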
+epprd_rg:cl_activate_fs[activate_fs_process_resources:760] : If there are any unmounted file systems for this resource group, go
+epprd_rg:cl_activate_fs[activate_fs_process_resources:761] : recover and mount them.
+epprd_rg:cl_activate_fs[activate_fs_process_resources:763] [[ -n /board_org,/oracle,/oracle/EPP,/oracle/EPP/mirrlogA,/oracle/EPP/mirrlogB,/oracle/EPP/oraarch,/oracle/EPP/origlogA,/oracle/EPP/origlogB,/oracle/EPP/sapdata1,/oracle/EPP/sapdata2,/oracle/EPP/sapdata3,/oracle/EPP/sapdata4,/sapmnt,/usr/sap ]]
+epprd_rg:cl_activate_fs[activate_fs_process_resources:765] IFS=,
+epprd_rg:cl_activate_fs[activate_fs_process_resources:765] set -- /board_org,/oracle,/oracle/EPP,/oracle/EPP/mirrlogA,/oracle/EPP/mirrlogB,/oracle/EPP/oraarch,/oracle/EPP/origlogA,/oracle/EPP/origlogB,/oracle/EPP/sapdata1,/oracle/EPP/sapdata2,/oracle/EPP/sapdata3,/oracle/EPP/sapdata4,/sapmnt,/usr/sap
+epprd_rg:cl_activate_fs[activate_fs_process_resources:765] print /board_org /oracle /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /sapmnt /usr/sap
+epprd_rg:cl_activate_fs[activate_fs_process_resources:765] RG_FILE_SYSTEMS='/board_org /oracle /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /sapmnt /usr/sap'
+epprd_rg:cl_activate_fs[activate_fs_process_resources:766] activate_fs_process_group sequential fsck '/board_org /oracle /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /sapmnt /usr/sap'
+epprd_rg:cl_activate_fs[activate_fs_process_group:362] PS4_LOOP=''
+epprd_rg:cl_activate_fs[activate_fs_process_group:362] typeset PS4_LOOP
+epprd_rg:cl_activate_fs[activate_fs_process_group:363] [[ high == high ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:363] set -x
+epprd_rg:cl_activate_fs[activate_fs_process_group:365] typeset RECOVERY_METHOD FSCHECK_TOOL FILESYSTEMS
+epprd_rg:cl_activate_fs[activate_fs_process_group:366] STATUS=0
+epprd_rg:cl_activate_fs[activate_fs_process_group:366] typeset -i STATUS
+epprd_rg:cl_activate_fs[activate_fs_process_group:368] RECOVERY_METHOD=sequential
+epprd_rg:cl_activate_fs[activate_fs_process_group:369] FSCHECK_TOOL=fsck
+epprd_rg:cl_activate_fs[activate_fs_process_group:370] shift 2
+epprd_rg:cl_activate_fs[activate_fs_process_group:371] FILESYSTEMS='/board_org /oracle /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /sapmnt /usr/sap'
+epprd_rg:cl_activate_fs[activate_fs_process_group:372] comm_failure=''
+epprd_rg:cl_activate_fs[activate_fs_process_group:372] typeset comm_failure
+epprd_rg:cl_activate_fs[activate_fs_process_group:373] rc_mount=''
+epprd_rg:cl_activate_fs[activate_fs_process_group:373] typeset rc_mount
+epprd_rg:cl_activate_fs[activate_fs_process_group:376] : Filter out duplicates, and file systems which are already mounted
+epprd_rg:cl_activate_fs[activate_fs_process_group:378] mounts_to_do '/board_org /oracle /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /sapmnt /usr/sap'
+epprd_rg:cl_activate_fs[mounts_to_do:283] tomount='/board_org /oracle /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /sapmnt /usr/sap'
+epprd_rg:cl_activate_fs[mounts_to_do:283] typeset tomount
+epprd_rg:cl_activate_fs[mounts_to_do:286] : Get most current list of mounted filesystems
+epprd_rg:cl_activate_fs[mounts_to_do:288] mount
+epprd_rg:cl_activate_fs[mounts_to_do:288] 2> /dev/null
+epprd_rg:cl_activate_fs[mounts_to_do:288] awk '$3 ~ /jfs2*$/ {print $2}'
+epprd_rg:cl_activate_fs[mounts_to_do:288] paste -s -
+epprd_rg:cl_activate_fs[mounts_to_do:288] mounted=$'/\t/usr\t/var\t/tmp\t/home\t/admin\t/opt\t/var/adm/ras/livedump\t/ptf'
+epprd_rg:cl_activate_fs[mounts_to_do:288] typeset mounted
+epprd_rg:cl_activate_fs[mounts_to_do:291] shift
+epprd_rg:cl_activate_fs[mounts_to_do:294] typeset -A mountedArray tomountArray
+epprd_rg:cl_activate_fs[mounts_to_do:295] typeset fs
+epprd_rg:cl_activate_fs[mounts_to_do:298] : Create an associative array for each list, which
+epprd_rg:cl_activate_fs[mounts_to_do:299] : has the side effect of dropping any duplicates
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/]=1
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/usr]=1
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/var]=1
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/tmp]=1
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/home]=1
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/admin]=1
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/opt]=1
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/var/adm/ras/livedump]=1
+epprd_rg:cl_activate_fs[mounts_to_do:302] mountedArray[/ptf]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/board_org]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/mirrlogA]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/mirrlogB]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/oraarch]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/origlogA]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/origlogB]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/sapdata1]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/sapdata2]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/sapdata3]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/oracle/EPP/sapdata4]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/sapmnt]=1
+epprd_rg:cl_activate_fs[mounts_to_do:306] tomountArray[/usr/sap]=1
+epprd_rg:cl_activate_fs[mounts_to_do:310] mounted=''
+epprd_rg:cl_activate_fs[mounts_to_do:311] tomount=''
+epprd_rg:cl_activate_fs[mounts_to_do:314] : expand fs from all tomountArray subscript names
+epprd_rg:cl_activate_fs[mounts_to_do:316] set +u
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:319] [[ '' == 1 ]]
+epprd_rg:cl_activate_fs[mounts_to_do:329] : Print all subscript names, which are the remaining mount
+epprd_rg:cl_activate_fs[mounts_to_do:330] : points that still have to be mounted
+epprd_rg:cl_activate_fs[mounts_to_do:332] print /board_org /oracle /oracle/EPP /oracle/EPP/mirrlogA /oracle/EPP/mirrlogB /oracle/EPP/oraarch /oracle/EPP/origlogA /oracle/EPP/origlogB /oracle/EPP/sapdata1 /oracle/EPP/sapdata2 /oracle/EPP/sapdata3 /oracle/EPP/sapdata4 /sapmnt /usr/sap
+epprd_rg:cl_activate_fs[mounts_to_do:332] tr ' ' '\n'
+epprd_rg:cl_activate_fs[mounts_to_do:332] sort -u
+epprd_rg:cl_activate_fs[mounts_to_do:334] set -u
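mounts_to_do computes the set difference "to mount" minus "already mounted" with two ksh93 associative arrays: loading the arrays drops duplicate entries, set +u lets unset subscripts expand to empty strings during the membership tests, and whatever remains in tomountArray is printed sorted. A condensed sketch with hypothetical mounted/tomount inputs:

    # Set difference via associative arrays (ksh93)
    typeset -A mountedArray tomountArray
    for fs in $mounted; do mountedArray[$fs]=1; done
    for fs in $tomount; do tomountArray[$fs]=1; done   # duplicates collapse
    set +u                           # unset subscripts expand to ''
    for fs in "${!tomountArray[@]}"; do
        [[ ${mountedArray[$fs]} == 1 ]] && unset "tomountArray[$fs]"
    done
    print "${!tomountArray[@]}" | tr ' ' '\n' | sort -u
    set -u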
+epprd_rg:cl_activate_fs[activate_fs_process_group:378] FILESYSTEMS=$'/board_org\n/oracle\n/oracle/EPP\n/oracle/EPP/mirrlogA\n/oracle/EPP/mirrlogB\n/oracle/EPP/oraarch\n/oracle/EPP/origlogA\n/oracle/EPP/origlogB\n/oracle/EPP/sapdata1\n/oracle/EPP/sapdata2\n/oracle/EPP/sapdata3\n/oracle/EPP/sapdata4\n/sapmnt\n/usr/sap'
+epprd_rg:cl_activate_fs[activate_fs_process_group:379] [[ -z $'/board_org\n/oracle\n/oracle/EPP\n/oracle/EPP/mirrlogA\n/oracle/EPP/mirrlogB\n/oracle/EPP/oraarch\n/oracle/EPP/origlogA\n/oracle/EPP/origlogB\n/oracle/EPP/sapdata1\n/oracle/EPP/sapdata2\n/oracle/EPP/sapdata3\n/oracle/EPP/sapdata4\n/sapmnt\n/usr/sap' ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:385] : Get unique temporary file names by using the resource group and the
+epprd_rg:cl_activate_fs[activate_fs_process_group:386] : current process ID
+epprd_rg:cl_activate_fs[activate_fs_process_group:388] [[ -z epprd_rg ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:397] TMP_FILENAME=epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs[activate_fs_process_group:398] rm -f /tmp/epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs[activate_fs_process_group:401] : If FSCHECK_TOOL is null get from ODM
+epprd_rg:cl_activate_fs[activate_fs_process_group:403] [[ -z fsck ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:408] print fsck
+epprd_rg:cl_activate_fs[activate_fs_process_group:408] FSCHECK_TOOL=fsck
+epprd_rg:cl_activate_fs[activate_fs_process_group:409] [[ fsck != fsck ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:416] : If RECOVERY_METHOD is null get from ODM
+epprd_rg:cl_activate_fs[activate_fs_process_group:418] [[ -z sequential ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:423] print sequential
+epprd_rg:cl_activate_fs[activate_fs_process_group:423] RECOVERY_METHOD=sequential
+epprd_rg:cl_activate_fs[activate_fs_process_group:424] [[ sequential != sequential ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:431] set -u
+epprd_rg:cl_activate_fs[activate_fs_process_group:434] : If FSCHECK_TOOL is set to logredo, the logredo for each jfslog has
+epprd_rg:cl_activate_fs[activate_fs_process_group:435] : already been done in get_disk_vg_fs, so we only need to do fsck check
+epprd_rg:cl_activate_fs[activate_fs_process_group:436] : and recovery here before going on to do the mounts
+epprd_rg:cl_activate_fs[activate_fs_process_group:438] [[ fsck == fsck ]]
+epprd_rg:cl_activate_fs[activate_fs_process_group:441] TOOL='/usr/sbin/fsck -f -p -o nologredo'
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:445] PS4_LOOP=/board_org
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:446] lsfs /board_org
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:446] grep -w /board_org
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:449] : Verify whether the file system /board_org is already mounted anywhere
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:454] : When a filesystem is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the characteristics of file systems.
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:457] lsfs -qc /board_org
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:457] MOUNTGUARD='yes)'
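The pipeline above extracts the MountGuard attribute from lsfs -qc output. Note what the trace captured: MOUNTGUARD is 'yes)' with the trailing parenthesis from the lsfs attribute list, so the later [[ $MOUNTGUARD == yes ]] test at :469 does not match. The extraction, as run:

    # Parse the MountGuard setting out of the colon-separated lsfs output
    MOUNTGUARD=$(LC_ALL=C lsfs -qc /board_org | tr : '\n' | grep -w MountGuard | cut -d' ' -f2)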
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:463] fsdb /board_org
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/board_org\n\nFile System Size:\t\t10485032\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t16384\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000009ffd28\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t91\n[10] s_agsize:\t\t0x00004000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x0013ffa5\n \t\t s_fsckpxd.address:\t1310629\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'boardl\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5832\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000000b5\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t181\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d880\t[52] last unmounted:\t0x63d4e41f\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/board_org\n\nFile System Size:\t\t10485032\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t16384\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000009ffd28\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t91\n[10] s_agsize:\t\t0x00004000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x0013ffa5\n \t\t s_fsckpxd.address:\t1310629\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'boardl\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5832\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000000b5\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t181\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d880\t[52] last unmounted:\t0x63d4e41f\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:467] FMMOUNT=''
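fsdb is driven non-interactively through a heredoc: su dumps the superblock and q quits. If the superblock carried FM_MOUNT, the filesystem would still be mounted (possibly on another node) and mounting it here would risk corruption; this superblock shows FM_CLEAN, so FMMOUNT comes back empty. A sketch of the check:

    # Dump the JFS2 superblock and test for the FM_MOUNT state flag
    FMMOUNT_OUT=$(fsdb /board_org <<EOF
    su
    q
    EOF
    )
    FMMOUNT=$(echo "$FMMOUNT_OUT" | grep -w FM_MOUNT | awk '{ print $1 }')
    [[ -n $FMMOUNT ]] && echo "mounted elsewhere - refusing to mount"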
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:469] [[ 'yes)' == yes ]]
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/board_org[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/boardlv
The current volume is: /dev/boardlv
Primary superblock is valid.
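With FSCHECK_TOOL=fsck, the recovery command is fsck -f (force the check), -p (preen: repair minor inconsistencies without prompting) and -o nologredo (skip log replay, since logredo already ran in logredo_volume_groups). RECOVERY_METHOD=sequential checks one device at a time in the foreground; parallel presumably backgrounds each check, collecting output in the per-RG temp file created earlier. A hedged sketch of the branch (the parallel redirection target is an assumption):

    # Sequential vs. parallel filesystem check; TMP_FILENAME is the
    # per-RG temp file, e.g. epprd_rg_activate_fs.tmp<pid>
    TOOL='/usr/sbin/fsck -f -p -o nologredo'
    if [[ $RECOVERY_METHOD == parallel ]]; then
        $TOOL /dev/boardlv >> /tmp/$TMP_FILENAME 2>&1 &
    else
        $TOOL /dev/boardlv
    fi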
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:445] PS4_LOOP=/oracle
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:446] lsfs /oracle
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:446] grep -w /oracle
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:449] : Verify whether the file system /oracle is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:454] : When a filesystem is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the characteristics of file systems.
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:457] lsfs -qc /oracle
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:457] MOUNTGUARD='yes)'
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:463] fsdb /oracle
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle\n\nFile System Size:\t\t41941352\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t65536\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000027ff968\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t211\n[10] s_agsize:\t\t0x00010000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x004fff2d\n \t\t s_fsckpxd.address:\t5242669\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'oracle\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5819\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000295\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t661\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d881\t[52] last unmounted:\t0x63d4e41f\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle\n\nFile System Size:\t\t41941352\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t65536\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000027ff968\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t211\n[10] s_agsize:\t\t0x00010000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x004fff2d\n \t\t s_fsckpxd.address:\t5242669\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'oracle\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5819\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000295\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t661\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d881\t[52] last unmounted:\t0x63d4e41f\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:469] [[ 'yes)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/oraclelv
The current volume is: /dev/oraclelv
Primary superblock is valid.
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:446] lsfs /oracle/EPP
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:446] grep -w /oracle/EPP
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:454] : When a filesystem is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the characteristics of file systems.
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:457] lsfs -qc /oracle/EPP
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:457] MOUNTGUARD='yes)'
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:463] fsdb /oracle/EPP
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP\n\nFile System Size:\t\t62912232\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t65536\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x0000000003bff6e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t291\n[10] s_agsize:\t\t0x00010000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x0077fedd\n \t\t s_fsckpxd.address:\t7864029\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'epplv\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5824\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000003d5\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t981\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d881\t[52] last unmounted:\t0x63d4e41f\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP\n\nFile System Size:\t\t62912232\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t65536\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x0000000003bff6e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t291\n[10] s_agsize:\t\t0x00010000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x0077fedd\n \t\t s_fsckpxd.address:\t7864029\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'epplv\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5824\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000003d5\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t981\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d881\t[52] last unmounted:\t0x63d4e41f\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:469] [[ 'yes)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/epplv
The current volume is: /dev/epplv
Primary superblock is valid.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:446] lsfs /oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:446] grep -w /oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/mirrlogA is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:454] : When a filesystem is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the characteristics of file systems.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:457] lsfs -qc /oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:457] MOUNTGUARD='yes)'
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:463] fsdb /oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/mirrlogA\n\nFile System Size:\t\t10482792\t(512 byte blocks)\nAggregate Block Size:\t\t512\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t400\n[3] s_size:\t0x00000000009ff468\t[20] s_bsize:\t\t512\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t9\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t0\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t2968\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x009ff468\n \t\t s_fsckpxd.address:\t10482792\n \t\t[28] s_ait.len:\t\t32\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x00000058\n J2_MOUNTGUARD \t\t s_ait.address:\t88\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'mirrlo\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5834\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t32\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000028b0\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t10416\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d881\t[52] last unmounted:\t0x63d4e41e\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/mirrlogA\n\nFile System Size:\t\t10482792\t(512 byte blocks)\nAggregate Block Size:\t\t512\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t400\n[3] s_size:\t0x00000000009ff468\t[20] s_bsize:\t\t512\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t9\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t0\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t2968\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x009ff468\n \t\t s_fsckpxd.address:\t10482792\n \t\t[28] s_ait.len:\t\t32\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x00000058\n J2_MOUNTGUARD \t\t s_ait.address:\t88\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'mirrlo\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5834\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t32\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000028b0\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t10416\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d881\t[52] last unmounted:\t0x63d4e41e\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:469] [[ 'yes)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogA[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/mirrlogAlv
The current volume is: /dev/mirrlogAlv
Primary superblock is valid.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:446] lsfs /oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:446] grep -w /oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/mirrlogB is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:454] : When a filesystem is protected against concurrent mounting, the
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:455] : MountGuard flag is set and the lsfs command displays the characteristics of file systems.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:457] lsfs -qc /oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:457] MOUNTGUARD='yes)'
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:463] fsdb /oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/mirrlogB\n\nFile System Size:\t\t10482792\t(512 byte blocks)\nAggregate Block Size:\t\t512\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t400\n[3] s_size:\t0x00000000009ff468\t[20] s_bsize:\t\t512\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t9\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t0\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t2968\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x009ff468\n \t\t s_fsckpxd.address:\t10482792\n \t\t[28] s_ait.len:\t\t32\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x00000058\n J2_MOUNTGUARD \t\t s_ait.address:\t88\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'mirrlo\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5835\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t32\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000028b0\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t10416\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d881\t[52] last unmounted:\t0x63d4e41e\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/mirrlogB\n\nFile System Size:\t\t10482792\t(512 byte blocks)\nAggregate Block Size:\t\t512\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t400\n[3] s_size:\t0x00000000009ff468\t[20] s_bsize:\t\t512\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t9\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t0\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t2968\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x009ff468\n \t\t s_fsckpxd.address:\t10482792\n \t\t[28] s_ait.len:\t\t32\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x00000058\n J2_MOUNTGUARD \t\t s_ait.address:\t88\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'mirrlo\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5835\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t32\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000028b0\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t10416\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d881\t[52] last unmounted:\t0x63d4e41e\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:469] [[ 'yes)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/mirrlogB[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/mirrlogBlv
The current volume is: /dev/mirrlogBlv
Primary superblock is valid.
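
The fsdb step at line 463 drives the file system debugger non-interactively: the here-document feeds it su (display the superblock) and q (quit), and line 467 then greps the captured dump for FM_MOUNT. A hedged sketch of that probe as a standalone function; fs_mount_flag is an invented name.

    #!/bin/ksh
    # Sketch: non-interactive superblock probe, mirroring lines 463-467.
    # fs_mount_flag is a hypothetical helper name.
    fs_mount_flag()
    {
        typeset fs=$1
        typeset out
        # 'su' dumps the superblock, 'q' quits fsdb.
        out=$(print 'su\nq' | fsdb "$fs")
        # An FM_MOUNT decode in s_state means some node holds (or did not
        # cleanly release) the mount; FM_CLEAN, as throughout this log,
        # means the file system was unmounted cleanly.
        print -- "$out" | grep -w FM_MOUNT | awk '{ print $1 }'
    }
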
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/oraarch
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:446] lsfs /oracle/EPP/oraarch
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:446] grep -w /oracle/EPP/oraarch
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/oraarch is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:454] : When a file system is protected against concurrent mounting,
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:455] : the MountGuard flag is set, and the lsfs command displays it among the file system characteristics.
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:457] lsfs -qc /oracle/EPP/oraarch
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:457] MOUNTGUARD='yes)'
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:463] fsdb /oracle/EPP/oraarch
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/oraarch\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'oraarc\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d582e\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d882\t[52] last unmounted:\t0x63d4e41e\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/oraarch\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'oraarc\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d582e\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d882\t[52] last unmounted:\t0x63d4e41e\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:469] [[ 'yes)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/oraarch[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/oraarchlv
The current volume is: /dev/oraarchlv
Primary superblock is valid.
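
With FMMOUNT empty (every dump above decodes s_state as FM_CLEAN rather than FM_MOUNT), the script falls through to line 508 and checks the logical volume non-interactively. Reading of the flags, hedged against the AIX documentation: -f requests a fast check, -p fixes minor problems without prompting, and the JFS2 option -o nologredo skips replaying the journal log. A sketch that runs the same call but surfaces a bad return code instead of discarding it:

    #!/bin/ksh
    # Sketch: the fsck invocation from line 508, with the return code kept.
    dev=/dev/oraarchlv          # example device taken from this log
    /usr/sbin/fsck -f -p -o nologredo "$dev"
    rc=$?
    if (( rc != 0 ))
    then
        print -u2 "fsck reported problems on $dev (rc=$rc)"
    fi
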
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/origlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:446] lsfs /oracle/EPP/origlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:446] grep -w /oracle/EPP/origlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/origlogA is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:454] : When a file system is protected against concurrent mounting,
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:455] : the MountGuard flag is set, and the lsfs command displays it among the file system characteristics.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:457] lsfs -qc /oracle/EPP/origlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:457] MOUNTGUARD='yes)'
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:463] fsdb /oracle/EPP/origlogA
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/origlogA\n\nFile System Size:\t\t10482792\t(512 byte blocks)\nAggregate Block Size:\t\t512\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t400\n[3] s_size:\t0x00000000009ff468\t[20] s_bsize:\t\t512\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t9\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t0\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t2968\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x009ff468\n \t\t s_fsckpxd.address:\t10482792\n \t\t[28] s_ait.len:\t\t32\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x00000058\n J2_MOUNTGUARD \t\t s_ait.address:\t88\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'origlo\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5832\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t32\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000028b0\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t10416\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d882\t[52] last unmounted:\t0x63d4e41e\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/origlogA\n\nFile System Size:\t\t10482792\t(512 byte blocks)\nAggregate Block Size:\t\t512\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t400\n[3] s_size:\t0x00000000009ff468\t[20] s_bsize:\t\t512\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t9\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t0\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t2968\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x009ff468\n \t\t s_fsckpxd.address:\t10482792\n \t\t[28] s_ait.len:\t\t32\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x00000058\n J2_MOUNTGUARD \t\t s_ait.address:\t88\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'origlo\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5832\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t32\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000028b0\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t10416\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d882\t[52] last unmounted:\t0x63d4e41e\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:469] [[ 'yes)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogA[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/origlogAlv
The current volume is: /dev/origlogAlv
Primary superblock is valid.
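
Lines 467-473 combine the two probes into the go/no-go decision: MOUNTGUARD says whether concurrent-mount protection is configured, and FMMOUNT says whether some node currently appears to hold the mount. The sketch below reconstructs that decision from the values visible in this trace; the exact branching inside cl_activate_fs between lines 469 and 503 is not fully visible here, so treat it as a guess.

    #!/bin/ksh
    # Sketch of the decision seen around lines 467-473, using the values
    # captured in this log; the real control flow is only partly visible.
    MOUNTGUARD='yes)'
    FMMOUNT=''
    if [[ $MOUNTGUARD == yes* ]]    # tolerate the trailing ')' in the log
    then
        if [[ -n $FMMOUNT ]]
        then
            print -u2 "refusing to mount: FM_MOUNT set, busy elsewhere"
            exit 1
        fi
    fi
    print "safe to fsck and mount"
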
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/origlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:446] lsfs /oracle/EPP/origlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:446] grep -w /oracle/EPP/origlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/origlogB is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:454] : When a file system is protected against concurrent mounting,
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:455] : the MountGuard flag is set, and the lsfs command displays it among the file system characteristics.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:457] lsfs -qc /oracle/EPP/origlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:457] MOUNTGUARD='yes)'
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:463] fsdb /oracle/EPP/origlogB
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/origlogB\n\nFile System Size:\t\t10482792\t(512 byte blocks)\nAggregate Block Size:\t\t512\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t400\n[3] s_size:\t0x00000000009ff468\t[20] s_bsize:\t\t512\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t9\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t0\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t2968\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x009ff468\n \t\t s_fsckpxd.address:\t10482792\n \t\t[28] s_ait.len:\t\t32\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x00000058\n J2_MOUNTGUARD \t\t s_ait.address:\t88\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'origlo\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5833\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t32\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000028b0\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t10416\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d882\t[52] last unmounted:\t0x63d4e41d\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/origlogB\n\nFile System Size:\t\t10482792\t(512 byte blocks)\nAggregate Block Size:\t\t512\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t400\n[3] s_size:\t0x00000000009ff468\t[20] s_bsize:\t\t512\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t9\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t0\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t2968\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x009ff468\n \t\t s_fsckpxd.address:\t10482792\n \t\t[28] s_ait.len:\t\t32\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x00000058\n J2_MOUNTGUARD \t\t s_ait.address:\t88\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'origlo\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5833\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t32\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x000028b0\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t10416\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d882\t[52] last unmounted:\t0x63d4e41d\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:469] [[ 'yes)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/origlogB[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/origlogBlv
The current volume is: /dev/origlogBlv
Primary superblock is valid.
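
One artifact worth noting in every block: MOUNTGUARD captures the literal string 'yes)'. The lsfs -q summary is parenthesized, so after the tr split, cut's second field keeps the closing parenthesis, and a strict comparison such as the [[ 'yes)' == yes ]] traced at line 469 cannot match exactly. A hedged sketch of a parse that normalizes the value first:

    #!/bin/ksh
    # Sketch: strip the ')' that the parenthesized lsfs -q summary leaves
    # on the extracted value before comparing it.
    raw=$(LC_ALL=C lsfs -qc /oracle/EPP/origlogB | tr ':' '\n' |
          grep -w MountGuard | cut -d' ' -f2)
    MOUNTGUARD=${raw%\)}            # "yes)" -> "yes"
    [[ $MOUNTGUARD == yes ]] && print "MountGuard enabled"
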
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:446] lsfs /oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:446] grep -w /oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/sapdata1 is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:454] : When a file system is protected against concurrent mounting,
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:455] : the MountGuard flag is set, and the lsfs command displays it among the file system characteristics.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:457] lsfs -qc /oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:457] MOUNTGUARD='yes)'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:463] fsdb /oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/sapdata1\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapdat\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d582f\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d883\t[52] last unmounted:\t0x63d4e41d\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/sapdata1\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapdat\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d582f\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d883\t[52] last unmounted:\t0x63d4e41d\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:469] [[ 'yes)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata1[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/sapdata1lv
The current volume is: /dev/sapdata1lv
Primary superblock is valid.
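
The test at line 503 shows the recovery method expanded to 'sequential', so each fsck completes before the next file system is handled. Under the parallel setting that the comparison implies, the checks would presumably be launched together and reaped with wait; a sketch of that alternative, with the device list taken from this log (illustrative, not the actual cl_activate_fs code):

    #!/bin/ksh
    # Sketch: how a parallel recovery pass might run the same checks.
    set -A devs /dev/sapdata1lv /dev/sapdata2lv /dev/sapdata3lv /dev/sapdata4lv
    for dev in "${devs[@]}"
    do
        /usr/sbin/fsck -f -p -o nologredo "$dev" &   # one check per device
    done
    wait        # reap every background check before mounting anything
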
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:446] lsfs /oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:446] grep -w /oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/sapdata2 is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:454] : When a file system is protected against concurrent mounting,
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:455] : the MountGuard flag is set, and the lsfs command displays it among the file system characteristics.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:457] lsfs -qc /oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:457] MOUNTGUARD='yes)'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:463] fsdb /oracle/EPP/sapdata2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/sapdata2\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapdat\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5830\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d883\t[52] last unmounted:\t0x63d4e41d\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/sapdata2\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapdat\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5830\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d883\t[52] last unmounted:\t0x63d4e41d\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:469] [[ 'yes)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata2[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/sapdata2lv
The current volume is: /dev/sapdata2lv
Primary superblock is valid.
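
The s_state_ts entries in each dump are epoch seconds in hex: here 0x63d4d883 for 'last mounted' and 0x63d4e41d for 'last unmounted', which decode to late January 2023, consistent with the rest of this log. They can be converted on the node itself; a sketch (perl is assumed to be available, as it is on a stock AIX install):

    #!/bin/ksh
    # Sketch: decode the hex timestamps from the fsdb superblock dump.
    for hex in 0x63d4d883 0x63d4e41d
    do
        secs=$(( hex ))                  # ksh arithmetic accepts 0x literals
        perl -le "print scalar gmtime($secs)"
    done
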
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:446] lsfs /oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:446] grep -w /oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/sapdata3 is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:454] : When a file system is protected against concurrent mounting,
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:455] : the MountGuard flag is set, and the lsfs command displays it among the file system characteristics.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:457] lsfs -qc /oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:457] MOUNTGUARD='yes)'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:463] fsdb /oracle/EPP/sapdata3
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/sapdata3\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapdat\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5831\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d883\t[52] last unmounted:\t0x63d4e41d\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/sapdata3\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapdat\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5831\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d883\t[52] last unmounted:\t0x63d4e41d\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:469] [[ 'yes)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata3[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/sapdata3lv
The current volume is: /dev/sapdata3lv
Primary superblock is valid.
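
Every grep -w FM_MOUNT above returns nothing because each superblock decodes s_state as FM_CLEAN. When reviewing saved dumps by hand, it can be quicker to pull the whole state neighborhood than to probe one flag at a time; a small sketch, with the dump assumed to have been saved to a file first (dumpfile is a hypothetical path):

    #!/bin/ksh
    # Sketch: summarize the mount-state portion of a saved fsdb dump.
    # dumpfile is a hypothetical path holding output like FMMOUNT_OUT above.
    dumpfile=/tmp/fsdb_super.out
    grep -E 's_state|FM_' "$dumpfile"    # state word plus its decoded flags
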
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:445] PS4_LOOP=/oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:446] lsfs /oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:446] grep -w /oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:449] : Verify whether the file system /oracle/EPP/sapdata4 is already mounted anywhere
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:454] : When a file system is protected against concurrent mounting,
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:455] : the MountGuard flag is set, and the lsfs command displays it among the file system characteristics.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:457] lsfs -qc /oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:457] MOUNTGUARD='yes)'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:463] fsdb /oracle/EPP/sapdata4
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/oracle/EPP/sapdata4\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapdat\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5831\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d883\t[52] last unmounted:\t0x63d4e41d\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/oracle/EPP/sapdata4\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapdat\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5831\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d883\t[52] last unmounted:\t0x63d4e41d\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:469] [[ 'yes)' == yes ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/oracle/EPP/sapdata4[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/sapdata4lv
The current volume is: /dev/sapdata4lv
Primary superblock is valid.
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:445] PS4_LOOP=/sapmnt
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:446] lsfs /sapmnt
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:446] grep -w /sapmnt
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:449] : Verify whether the file system /sapmnt is already mounted anywhere
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:454] : When a filesystem is protected against concurrent mounting,
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:455] : the MountGuard flag is set, and the lsfs command reports it among the file system characteristics.
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:457] lsfs -qc /sapmnt
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:457] MOUNTGUARD='yes)'
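Note: the MOUNTGUARD probe above splits the colon-delimited 'lsfs -qc' record into lines and cuts the second space-delimited field of the line carrying MountGuard. A minimal re-creation of that pipeline follows; the trailing ')' in the captured value appears to be residue of the parenthesized characteristics block that lsfs -q prints, surviving the space-delimited cut (an assumption about the lsfs output format; only the trace itself is authoritative). The practical effect is visible below at activate_fs_process_group:469: the strict test [[ 'yes)' == yes ]] is false, so the MountGuard shortcut is skipped and the script falls through to the regular fsck at line 508.

    # Sketch: how MOUNTGUARD is derived for one mount point
    MOUNTGUARD=$(LC_ALL=C lsfs -qc /sapmnt | tr : '\n' | grep -w MountGuard | cut -d' ' -f2)
    # A value of exactly 'yes' would confirm the guard; the stray ')' seen in
    # this run makes the later [[ "$MOUNTGUARD" == yes ]] comparison fail.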
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:463] fsdb /sapmnt
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/sapmnt\n\nFile System Size:\t\t20970472\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t32768\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000013ffbe8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t131\n[10] s_agsize:\t\t0x00008000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x0027ff7d\n \t\t s_fsckpxd.address:\t2621309\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapmnt\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5818\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000155\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t341\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d884\t[52] last unmounted:\t0x63d4e41c\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/sapmnt\n\nFile System Size:\t\t20970472\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t32768\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000013ffbe8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t131\n[10] s_agsize:\t\t0x00008000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x0027ff7d\n \t\t s_fsckpxd.address:\t2621309\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'sapmnt\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5818\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000155\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t341\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d884\t[52] last unmounted:\t0x63d4e41c\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:467] FMMOUNT=''
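Note: the FM_MOUNT probe drives fsdb non-interactively through a here-document: 'su' dumps the superblock and 'q' quits, and the dump is then scanned for the FM_MOUNT keyword. In this run the state word shows FM_CLEAN instead, so FMMOUNT stays empty and the file system is treated as not mounted anywhere else. A condensed sketch of the same check (mount point as in the trace):

    # Sketch: read the JFS2 superblock state without an interactive fsdb session
    FMMOUNT_OUT=$(fsdb /sapmnt <<'EOF'
    su
    q
    EOF
    )
    # Non-empty output here would mean the superblock is flagged as mounted,
    # i.e. a possible concurrent mount on another node; empty means clean.
    FMMOUNT=$(echo "$FMMOUNT_OUT" | grep -w FM_MOUNT | awk '{ print $1 }')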
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:469] [[ 'yes)' == yes ]]
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/sapmnt[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/sapmntlv
The current volume is: /dev/sapmntlv
Primary superblock is valid.
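Note: both messages are the expected output of a healthy pre-mount check. The flags matter here: on AIX, -f requests a fast check (a file system whose log shows a clean unmount is not exhaustively re-verified), -p repairs minor inconsistencies without prompting, and -o nologredo tells JFS2 not to replay the journal log before checking. A sketch of the same invocation with its status captured (device name as in the trace):

    # Sketch: non-interactive pre-mount verification of a JFS2 logical volume
    /usr/sbin/fsck -f -p -o nologredo /dev/sapmntlv
    rc=$?    # 0 means the superblock and structures checked out clean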
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:445] PS4_LOOP=/usr/sap
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:446] lsfs /usr/sap
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:446] grep -w /usr/sap
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:446] read DEV rest
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:449] : Verify whether the file system /usr/sap is already mounted anywhere
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:450] : else in the cluster. If it is already mounted somewhere else,
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:451] : we don't want to continue here, to avoid data corruption.
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:454] : When a filesystem is protected against concurrent mounting,
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:455] : the MountGuard flag is set, and the lsfs command reports it among the file system characteristics.
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:457] lsfs -qc /usr/sap
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:457] LC_ALL=C
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:457] tr : '\n'
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:457] grep -w MountGuard
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:457] cut '-d ' -f2
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:457] MOUNTGUARD='yes)'
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:460] : fsdb and its subcommands allow us to view the information in a file system.
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:461] : The FM_MOUNT flag is set if the file system is mounted cleanly on any node.
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:463] fsdb /usr/sap
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:463] 0<< \EOF
su
q
EOF
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:463] FMMOUNT_OUT=$'\nFile System:\t\t\t/usr/sap\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'saplv\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5815\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d884\t[52] last unmounted:\t0x63d4e41c\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:467] echo $'\nFile System:\t\t\t/usr/sap\n\nFile System Size:\t\t104853992\t(512 byte blocks)\nAggregate Block Size:\t\t4096\nAllocation Group Size:\t\t131072\t(aggregate blocks)\n\n> [1] s_magic:\t\t\'J2FS\'\t\t[18] s_fscklog:\t\t1\n[2] s_version:\t\t2\t\t[19] s_fsckloglen:\t50\n[3] s_size:\t0x00000000063ff1e8\t[20] s_bsize:\t\t4096\n[4] s_logdev:\t0x8000003300000001\t[21] s_logserial:\t0x00000002\n[5] s_l2bsize:\t\t12\t\t[22] s_logpxd.len:\t0\n[6] s_l2bfactor:\t3\t\t[23] s_logpxd.addr1:\t0x00\n[7] s_pbsize:\t\t512\t\t[24] s_logpxd.addr2:\t0x00000000\n[8] s_l2pbsize:\t\t9\t\t s_logpxd.address:\t0\n[9] s_devbsize:\t\t512\t\t[25] s_fsckpxd.len:\t451\n[10] s_agsize:\t\t0x00020000\t[26] s_fsckpxd.addr1:\t0x00\n[11] s_flag:\t\t0x02000100\t[27] s_fsckpxd.addr2:\t0x00c7fe3d\n \t\t s_fsckpxd.address:\t13106749\n \t\t[28] s_ait.len:\t\t4\n J2_GROUPCOMMIT \t\t[29] s_ait.addr1:\t0x00\n \t\t[30] s_ait.addr2:\t0x0000000b\n J2_MOUNTGUARD \t\t s_ait.address:\t11\n[12] s_state:\t\t0x00000000\t[31] s_fpack:\t\t\'saplv\'\n FM_CLEAN \t[32] s_fname:\t\t\'\'\n[13] s_time.tj_sec: 0x00000000639d5815\t[33] s_time.tj_nsec:\t0x00000000\n[14] s_ait2.len:\t4\t\t[34] s_xfsckpxd.len:\t0\n[15] s_ait2.addr1:\t0x00\t\t[35] s_xfsckpxd.addr1:\t0x00\n[16] s_ait2.addr2:\t0x00000656\t[36] s_xfsckpxd.addr2:\t0x00000000\n s_ait2.address:\t1622\t\t s_xfsckpxd.address:\t0\n[17] s_xsize: 0x0000000000000000\t[37] s_xlogpxd.len:\t0\n[40] feature_compat: 0x0000000000000005 [38] s_xlogpxd.addr1:\t0x00\n[41] feature_rdonly: 0x0000000000000000 [39] s_xlogpxd.addr2:\t0x00000000\n[42] feature_incompat: 0x0000000000000000 s_xlogpxd.address:\t0\n[43-49] <...snapshot info...>\t\t[50] s_maxext:\t0x00000000\n s_state_ts[8]:\n[51] last mounted:\t0x63d4d884\t[52] last unmounted:\t0x63d4e41c\n[53] last marked dirty:\t0x00000000\t[54] last recovered:\t0x00000000\n[55] last size change:\t0x00000000\t[56] unused timestamp:\t0x00000000\n[57] unused timestamp:\t0x00000000\t[58] unused timestamp:\t0x00000000\n[59] s_szchng:\t\t0x00000000\t[60] s_origAGSZ:\t0x00000000\n[61] s_origSZ:\t0x0000000000000000\ndisplay_super: [m]odify, [s]napshot info or e[x]it: > '
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:467] grep -w FM_MOUNT
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:467] awk '{ print $1 }'
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:467] FMMOUNT=''
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:468] fsck_check=''
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:469] [[ 'yes)' == yes ]]
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:473] [[ -n '' ]]
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:503] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:508] /usr/sbin/fsck -f -p -o nologredo /dev/saplv
The current volume is: /dev/saplv
Primary superblock is valid.
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:513] : Allow any backgrounded fsck operations to finish
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:515] wait
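Note: because this run takes the sequential branch ([[ sequential == parallel ]] is false for every file system above), each fsck ran in the foreground and this wait is effectively a no-op. In the parallel branch the per-device checks would be backgrounded and collected here; a sketch of that shape (hypothetical, since this trace never exercises it):

    # Sketch: parallel recovery backgrounds each check, then reaps them all
    for dev in /dev/sapdata4lv /dev/sapmntlv /dev/saplv; do
        /usr/sbin/fsck -f -p -o nologredo "$dev" &    # assumed parallel form
    done
    wait    # allow any backgrounded fsck operations to finish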
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:519] : Now attempt to mount all the file systems
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:521] ALLFS=All_filesystems
+epprd_rg:cl_activate_fs:/usr/sap[activate_fs_process_group:522] cl_RMupdate resource_acquiring All_filesystems cl_activate_fs
2023-01-28T18:03:45.108553
2023-01-28T18:03:45.112901
+epprd_rg:cl_activate_fs(0.773):/usr/sap[activate_fs_process_group:524] PS4_TIMER=true
+epprd_rg:cl_activate_fs(0.773):/usr/sap[activate_fs_process_group:524] typeset PS4_TIMER
+epprd_rg:cl_activate_fs(0.773):/board_org[activate_fs_process_group:527] PS4_LOOP=/board_org
+epprd_rg:cl_activate_fs(0.773):/board_org[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(0.773):/board_org[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(0.773):/board_org[activate_fs_process_group:540] fs_mount /board_org fsck epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:69] FS=/board_org
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:81] : Here check to see if the information in /etc/filesystems for /board_org
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(0.773):/board_org[fs_mount:86] lsfs -c /board_org
+epprd_rg:cl_activate_fs(0.774):/board_org[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(0.779):/board_org[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(0.775):/board_org[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/board_org:/dev/boardlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(0.779):/board_org[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(0.779):/board_org[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(0.780):/board_org[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(0.775):/board_org[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/board_org:/dev/boardlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(0.781):/board_org[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(0.782):/board_org[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(0.782):/board_org[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:100] LV_name=boardlv
+epprd_rg:cl_activate_fs(0.783):/board_org[fs_mount:101] getlvcb -T -A boardlv
+epprd_rg:cl_activate_fs(0.784):/board_org[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(0.802):/board_org[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(0.784):/board_org[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.11 \n\t lvname = boardlv \n\t label = /board_org \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:47 2022\n \t time modified = Sat Jan 28 17:10:40 2023\n '
+epprd_rg:cl_activate_fs(0.802):/board_org[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(0.802):/board_org[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(0.803):/board_org[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(0.784):/board_org[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.11 \n\t lvname = boardlv \n\t label = /board_org \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:47 2022\n \t time modified = Sat Jan 28 17:10:40 2023\n '
+epprd_rg:cl_activate_fs(0.804):/board_org[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(0.805):/board_org[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(0.806):/board_org[fs_mount:115] clodmget -q 'name = boardlv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(0.810):/board_org[fs_mount:115] CuAt_label=/board_org
+epprd_rg:cl_activate_fs(0.810):/board_org[fs_mount:118] : At this point, if things are working correctly, /board_org from /etc/filesystems
+epprd_rg:cl_activate_fs(0.810):/board_org[fs_mount:119] : should match /board_org from CuAt ODM and /board_org from the LVCB
+epprd_rg:cl_activate_fs(0.810):/board_org[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(0.810):/board_org[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(0.810):/board_org[fs_mount:123] [[ /board_org != /board_org ]]
+epprd_rg:cl_activate_fs(0.810):/board_org[fs_mount:128] [[ /board_org != /board_org ]]
+epprd_rg:cl_activate_fs(0.810):/board_org[fs_mount:133] (( 0 == 1 ))
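Note: the block above is the three-way consistency check described in the inline comments: the mount point recorded in /etc/filesystems (via lsfs), the label stored in the on-disk LVCB (via getlvcb), and the label held in the CuAt ODM (via clodmget) must all agree before the mount is attempted; mismatches are only flagged here because clvaryonvg already made its best repair effort. A condensed sketch of the same comparison (field handling reconstructed from the trace):

    # Sketch: cross-check the three places that record a file system's mount point
    FS=/board_org
    LV_name=$(lsfs -c "$FS" 2>&1 | tail -1 | cut -d: -f2)      # /dev/boardlv
    LV_name=${LV_name#/dev/}                                   # boardlv
    LVCB_label=$(getlvcb -T -A "$LV_name" 2>&1 | grep -w 'label =' | awk '{ print $3 }')
    CuAt_label=$(clodmget -q "name = $LV_name and attribute = label" -f value -n CuAt)
    [[ "$LVCB_label" != "$FS" || "$CuAt_label" != "$FS" ]] && STATUS=1   # flag only, no repair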
+epprd_rg:cl_activate_fs(0.810):/board_org[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(0.810):/board_org[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(0.830):/board_org[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(0.830):/board_org[fs_mount:144] [[ -n '' ]]
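Note: the clwparroot excursion resolves whether the resource group lives inside a WPAR: loadWparName asks the HACMPresource ODM class for a WPAR_NAME value, and the empty answer here means the group is not WPAR-enabled, so WPAR_ROOT stays empty and the mount proceeds in the global environment. Reduced to its core (query string copied from the trace):

    # Sketch: resolve the WPAR name for a resource group; '' means no WPAR
    wparName=$(clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource)
    [[ -z "$wparName" ]] && exit 0    # caller receives an empty WPAR_ROOT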
+epprd_rg:cl_activate_fs(0.830):/board_org[fs_mount:160] amlog_trace '' 'Activating Filesystem|/board_org'
+epprd_rg:cl_activate_fs(0.830):/board_org[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(0.830):/board_org[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(0.855):/board_org[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(0.857):/board_org[amlog_trace:319] DATE=2023-01-28T18:03:45.197781
+epprd_rg:cl_activate_fs(0.858):/board_org[amlog_trace:320] echo '|2023-01-28T18:03:45.197781|INFO: Activating Filesystem|/board_org'
+epprd_rg:cl_activate_fs(0.858):/board_org[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(0.858):/board_org[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(0.860):/board_org[fs_mount:162] : Try to mount filesystem /board_org at Jan 28 18:03:45.000
+epprd_rg:cl_activate_fs(0.860):/board_org[fs_mount:163] mount /board_org
+epprd_rg:cl_activate_fs(0.872):/board_org[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(0.872):/board_org[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(0.872):/board_org[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(0.872):/board_org[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/board_org'
+epprd_rg:cl_activate_fs(0.872):/board_org[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(0.873):/board_org[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(0.897):/board_org[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(0.900):/board_org[amlog_trace:319] DATE=2023-01-28T18:03:45.240177
+epprd_rg:cl_activate_fs(0.900):/board_org[amlog_trace:320] echo '|2023-01-28T18:03:45.240177|INFO: Activating Filesystems completed|/board_org'
+epprd_rg:cl_activate_fs(0.900):/board_org[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
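Note: each amlog_trace call produces one pipe-delimited record in the availability log: clcycle rotates clavailability.log if needed, cltime supplies a microsecond timestamp, and the record is appended. The pair of records above brackets the mount of /board_org. The record format, reconstructed from the echo lines in the trace:

    # Sketch: the record amlog_trace appends for each traced event
    DATE=$(cltime)    # e.g. 2023-01-28T18:03:45.240177
    echo "|$DATE|INFO: Activating Filesystems completed|/board_org" \
        >> /var/hacmp/availability/clavailability.log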
+epprd_rg:cl_activate_fs(0.900):/board_org[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(0.900):/board_org[fs_mount:226] : Each of the V, R, M and F fields is padded to fixed length,
+epprd_rg:cl_activate_fs(0.900):/board_org[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(0.900):/board_org[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(0.900):/board_org[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(0.900):/board_org[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(0.900):/board_org[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(0.900):/board_org[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(0.900):/board_org[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(0.900):/board_org[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(0.901):/board_org[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(0.902):/board_org[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(0.904):/board_org[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(0.904):/board_org[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(0.904):/board_org[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(0.904):/board_org[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(0.904):/board_org[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
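Note: the version gate packs the installed bos.rte.filesystem level into a single integer so the comparison reads as plain arithmetic: R is zero-padded to two digits and M and F to three (ksh typeset -Z), so 7.2.5.102 becomes 702005102, which clears the 701001000 (AIX 7.1.1.0) threshold for mountguard support. The same trick in isolation:

    # Sketch: fixed-width VRMF packing, as done in fs_mount
    typeset -li V R M F VRMF
    typeset -Z2 R
    typeset -Z3 M F
    # ksh runs the last pipeline stage in the current shell, so read sets V R M F
    lslpp -lcqOr bos.rte.filesystem | cut -f3 -d: | IFS=. read V R M F
    VRMF=$V$R$M$F                       # 7.2.5.102 -> 702005102
    (( V == 7 && VRMF >= 701001000 )) && : mountguard is supported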
+epprd_rg:cl_activate_fs(0.904):/board_org[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(0.904):/board_org[fs_mount:245] : the setting would cause a VG timestamp change, so run it only once
+epprd_rg:cl_activate_fs(0.904):/board_org[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(0.784):/board_org[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.11 \n\t lvname = boardlv \n\t label = /board_org \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:47 2022\n \t time modified = Sat Jan 28 17:10:40 2023\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(0.904):/board_org[fs_mount:255] return 0
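Note: the final test keeps the guard setting idempotent: the LVCB captured earlier already contains mountguard=yes in its fs attribute, so nothing is rewritten; only an unguarded file system would be changed, because that write bumps the volume group timestamp (the reason the comment says to run it once). The branch this run never takes might look like the following (the chfs invocation is an assumption based on the comment at fs_mount:244, not something shown in this trace):

    # Sketch: enable MountGuard only when the LVCB does not already record it
    if [[ "$LVCB_info" != *mountguard=yes* ]]; then
        chfs -a mountguard=yes /board_org    # hypothetical: not exercised in this run
    fi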
+epprd_rg:cl_activate_fs(0.904):/oracle[activate_fs_process_group:527] PS4_LOOP=/oracle
+epprd_rg:cl_activate_fs(0.904):/oracle[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(0.904):/oracle[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(0.904):/oracle[activate_fs_process_group:540] fs_mount /oracle fsck epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:69] FS=/oracle
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:81] : Here check to see if the information in /etc/filesystems for /oracle
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(0.904):/oracle[fs_mount:86] lsfs -c /oracle
+epprd_rg:cl_activate_fs(0.905):/oracle[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(0.910):/oracle[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(0.906):/oracle[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle:/dev/oraclelv:jfs2:::41943040:rw:no:no'
+epprd_rg:cl_activate_fs(0.911):/oracle[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(0.911):/oracle[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(0.912):/oracle[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(0.906):/oracle[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle:/dev/oraclelv:jfs2:::41943040:rw:no:no'
+epprd_rg:cl_activate_fs(0.913):/oracle[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(0.914):/oracle[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(0.914):/oracle[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(0.914):/oracle[fs_mount:100] LV_name=oraclelv
+epprd_rg:cl_activate_fs(0.914):/oracle[fs_mount:101] getlvcb -T -A oraclelv
+epprd_rg:cl_activate_fs(0.915):/oracle[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(0.933):/oracle[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(0.915):/oracle[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.4 \n\t lvname = oraclelv \n\t label = /oracle \n\t machine id = 44AF14B00 \n\t number lps = 40 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:42 2022\n \t time modified = Sat Jan 28 17:10:41 2023\n '
+epprd_rg:cl_activate_fs(0.933):/oracle[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(0.933):/oracle[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(0.934):/oracle[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(0.915):/oracle[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.4 \n\t lvname = oraclelv \n\t label = /oracle \n\t machine id = 44AF14B00 \n\t number lps = 40 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:42 2022\n \t time modified = Sat Jan 28 17:10:41 2023\n '
+epprd_rg:cl_activate_fs(0.935):/oracle[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(0.936):/oracle[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(0.937):/oracle[fs_mount:115] clodmget -q 'name = oraclelv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(0.940):/oracle[fs_mount:115] CuAt_label=/oracle
+epprd_rg:cl_activate_fs(0.940):/oracle[fs_mount:118] : At this point, if things are working correctly, /oracle from /etc/filesystems
+epprd_rg:cl_activate_fs(0.940):/oracle[fs_mount:119] : should match /oracle from CuAt ODM and /oracle from the LVCB
+epprd_rg:cl_activate_fs(0.940):/oracle[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(0.940):/oracle[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(0.940):/oracle[fs_mount:123] [[ /oracle != /oracle ]]
+epprd_rg:cl_activate_fs(0.940):/oracle[fs_mount:128] [[ /oracle != /oracle ]]
+epprd_rg:cl_activate_fs(0.940):/oracle[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(0.940):/oracle[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(0.941):/oracle[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(0.960):/oracle[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(0.960):/oracle[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(0.960):/oracle[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle'
+epprd_rg:cl_activate_fs(0.960):/oracle[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(0.961):/oracle[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(0.985):/oracle[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(0.988):/oracle[amlog_trace:319] DATE=2023-01-28T18:03:45.328010
+epprd_rg:cl_activate_fs(0.988):/oracle[amlog_trace:320] echo '|2023-01-28T18:03:45.328010|INFO: Activating Filesystem|/oracle'
+epprd_rg:cl_activate_fs(0.988):/oracle[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(0.988):/oracle[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(0.990):/oracle[fs_mount:162] : Try to mount filesystem /oracle at Jan 28 18:03:45.000
+epprd_rg:cl_activate_fs(0.990):/oracle[fs_mount:163] mount /oracle
+epprd_rg:cl_activate_fs(1.002):/oracle[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.002):/oracle[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(1.002):/oracle[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(1.002):/oracle[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle'
+epprd_rg:cl_activate_fs(1.002):/oracle[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.003):/oracle[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.027):/oracle[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.029):/oracle[amlog_trace:319] DATE=2023-01-28T18:03:45.369615
+epprd_rg:cl_activate_fs(1.029):/oracle[amlog_trace:320] echo '|2023-01-28T18:03:45.369615|INFO: Activating Filesystems completed|/oracle'
+epprd_rg:cl_activate_fs(1.029):/oracle[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.029):/oracle[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(1.029):/oracle[fs_mount:226] : Each of the V, R, M and F fields is padded to fixed length,
+epprd_rg:cl_activate_fs(1.029):/oracle[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(1.029):/oracle[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(1.029):/oracle[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(1.029):/oracle[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(1.030):/oracle[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(1.030):/oracle[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(1.030):/oracle[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(1.030):/oracle[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(1.031):/oracle[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(1.031):/oracle[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(1.033):/oracle[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(1.033):/oracle[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(1.033):/oracle[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(1.033):/oracle[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(1.033):/oracle[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
+epprd_rg:cl_activate_fs(1.033):/oracle[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(1.033):/oracle[fs_mount:245] : the setting would cause a VG timestamp change, so run it only once
+epprd_rg:cl_activate_fs(1.033):/oracle[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(0.915):/oracle[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.4 \n\t lvname = oraclelv \n\t label = /oracle \n\t machine id = 44AF14B00 \n\t number lps = 40 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:42 2022\n \t time modified = Sat Jan 28 17:10:41 2023\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(1.033):/oracle[fs_mount:255] return 0
+epprd_rg:cl_activate_fs(1.033):/oracle/EPP[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP
+epprd_rg:cl_activate_fs(1.033):/oracle/EPP[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(1.033):/oracle/EPP[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(1.033):/oracle/EPP[activate_fs_process_group:540] fs_mount /oracle/EPP fsck epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(1.033):/oracle/EPP[fs_mount:69] FS=/oracle/EPP
+epprd_rg:cl_activate_fs(1.033):/oracle/EPP[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:81] : Here check to see if the information in /etc/filesystems for /oracle/EPP
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(1.034):/oracle/EPP[fs_mount:86] lsfs -c /oracle/EPP
+epprd_rg:cl_activate_fs(1.035):/oracle/EPP[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(1.040):/oracle/EPP[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(1.035):/oracle/EPP[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP:/dev/epplv:jfs2:::62914560:rw:no:no'
+epprd_rg:cl_activate_fs(1.040):/oracle/EPP[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(1.040):/oracle/EPP[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.041):/oracle/EPP[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(1.035):/oracle/EPP[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP:/dev/epplv:jfs2:::62914560:rw:no:no'
+epprd_rg:cl_activate_fs(1.042):/oracle/EPP[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(1.042):/oracle/EPP[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(1.042):/oracle/EPP[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(1.044):/oracle/EPP[fs_mount:100] LV_name=epplv
+epprd_rg:cl_activate_fs(1.044):/oracle/EPP[fs_mount:101] getlvcb -T -A epplv
+epprd_rg:cl_activate_fs(1.045):/oracle/EPP[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(1.063):/oracle/EPP[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(1.045):/oracle/EPP[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.5 \n\t lvname = epplv \n\t label = /oracle/EPP \n\t machine id = 44AF14B00 \n\t number lps = 60 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:43 2022\n \t time modified = Sat Jan 28 17:10:41 2023\n '
+epprd_rg:cl_activate_fs(1.063):/oracle/EPP[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(1.063):/oracle/EPP[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.064):/oracle/EPP[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(1.045):/oracle/EPP[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.5 \n\t lvname = epplv \n\t label = /oracle/EPP \n\t machine id = 44AF14B00 \n\t number lps = 60 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:43 2022\n \t time modified = Sat Jan 28 17:10:41 2023\n '
+epprd_rg:cl_activate_fs(1.064):/oracle/EPP[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(1.066):/oracle/EPP[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(1.066):/oracle/EPP[fs_mount:115] clodmget -q 'name = epplv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(1.070):/oracle/EPP[fs_mount:115] CuAt_label=/oracle/EPP
+epprd_rg:cl_activate_fs(1.070):/oracle/EPP[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP from /etc/filesystems
+epprd_rg:cl_activate_fs(1.070):/oracle/EPP[fs_mount:119] : should match /oracle/EPP from CuAt ODM and /oracle/EPP from the LVCB
+epprd_rg:cl_activate_fs(1.070):/oracle/EPP[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(1.070):/oracle/EPP[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(1.070):/oracle/EPP[fs_mount:123] [[ /oracle/EPP != /oracle/EPP ]]
+epprd_rg:cl_activate_fs(1.070):/oracle/EPP[fs_mount:128] [[ /oracle/EPP != /oracle/EPP ]]
+epprd_rg:cl_activate_fs(1.070):/oracle/EPP[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.070):/oracle/EPP[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(1.070):/oracle/EPP[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(1.090):/oracle/EPP[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.090):/oracle/EPP[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(1.090):/oracle/EPP[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP'
+epprd_rg:cl_activate_fs(1.090):/oracle/EPP[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.091):/oracle/EPP[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.115):/oracle/EPP[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.117):/oracle/EPP[amlog_trace:319] DATE=2023-01-28T18:03:45.457500
+epprd_rg:cl_activate_fs(1.117):/oracle/EPP[amlog_trace:320] echo '|2023-01-28T18:03:45.457500|INFO: Activating Filesystem|/oracle/EPP'
+epprd_rg:cl_activate_fs(1.117):/oracle/EPP[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.117):/oracle/EPP[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(1.120):/oracle/EPP[fs_mount:162] : Try to mount filesystem /oracle/EPP at Jan 28 18:03:45.000
+epprd_rg:cl_activate_fs(1.120):/oracle/EPP[fs_mount:163] mount /oracle/EPP
+epprd_rg:cl_activate_fs(1.131):/oracle/EPP[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.132):/oracle/EPP[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(1.132):/oracle/EPP[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(1.132):/oracle/EPP[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP'
+epprd_rg:cl_activate_fs(1.132):/oracle/EPP[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.132):/oracle/EPP[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.156):/oracle/EPP[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.159):/oracle/EPP[amlog_trace:319] DATE=2023-01-28T18:03:45.499272
+epprd_rg:cl_activate_fs(1.159):/oracle/EPP[amlog_trace:320] echo '|2023-01-28T18:03:45.499272|INFO: Activating Filesystems completed|/oracle/EPP'
+epprd_rg:cl_activate_fs(1.159):/oracle/EPP[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.159):/oracle/EPP[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(1.159):/oracle/EPP[fs_mount:226] : Each of the V, R, M and F fields is padded to fixed length,
+epprd_rg:cl_activate_fs(1.159):/oracle/EPP[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(1.159):/oracle/EPP[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(1.159):/oracle/EPP[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(1.159):/oracle/EPP[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(1.159):/oracle/EPP[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(1.159):/oracle/EPP[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(1.159):/oracle/EPP[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(1.159):/oracle/EPP[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(1.160):/oracle/EPP[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(1.161):/oracle/EPP[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP[fs_mount:245] : the setting would cause a VG timestamp change, so run it only once
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(1.045):/oracle/EPP[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.5 \n\t lvname = epplv \n\t label = /oracle/EPP \n\t machine id = 44AF14B00 \n\t number lps = 60 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:43 2022\n \t time modified = Sat Jan 28 17:10:41 2023\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP[fs_mount:255] return 0
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[activate_fs_process_group:540] fs_mount /oracle/EPP/mirrlogA fsck epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:69] FS=/oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:81] : Here check to see if the information in /etc/filesystems for /oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(1.163):/oracle/EPP/mirrlogA[fs_mount:86] lsfs -c /oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs(1.164):/oracle/EPP/mirrlogA[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(1.169):/oracle/EPP/mirrlogA[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(1.165):/oracle/EPP/mirrlogA[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/mirrlogA:/dev/mirrlogAlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(1.169):/oracle/EPP/mirrlogA[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(1.169):/oracle/EPP/mirrlogA[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.170):/oracle/EPP/mirrlogA[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(1.165):/oracle/EPP/mirrlogA[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/mirrlogA:/dev/mirrlogAlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(1.171):/oracle/EPP/mirrlogA[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(1.172):/oracle/EPP/mirrlogA[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(1.172):/oracle/EPP/mirrlogA[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(1.173):/oracle/EPP/mirrlogA[fs_mount:100] LV_name=mirrlogAlv
+epprd_rg:cl_activate_fs(1.173):/oracle/EPP/mirrlogA[fs_mount:101] getlvcb -T -A mirrlogAlv
+epprd_rg:cl_activate_fs(1.174):/oracle/EPP/mirrlogA[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(1.193):/oracle/EPP/mirrlogA[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(1.174):/oracle/EPP/mirrlogA[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.14 \n\t lvname = mirrlogAlv \n\t label = /oracle/EPP/mirrlogA \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:49 2022\n \t time modified = Sat Jan 28 17:10:41 2023\n '
+epprd_rg:cl_activate_fs(1.193):/oracle/EPP/mirrlogA[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(1.193):/oracle/EPP/mirrlogA[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.194):/oracle/EPP/mirrlogA[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(1.174):/oracle/EPP/mirrlogA[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.14 \n\t lvname = mirrlogAlv \n\t label = /oracle/EPP/mirrlogA \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:49 2022\n \t time modified = Sat Jan 28 17:10:41 2023\n '
+epprd_rg:cl_activate_fs(1.195):/oracle/EPP/mirrlogA[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(1.195):/oracle/EPP/mirrlogA[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(1.197):/oracle/EPP/mirrlogA[fs_mount:115] clodmget -q 'name = mirrlogAlv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(1.200):/oracle/EPP/mirrlogA[fs_mount:115] CuAt_label=/oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs(1.200):/oracle/EPP/mirrlogA[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP/mirrlogA from /etc/filesystems
+epprd_rg:cl_activate_fs(1.200):/oracle/EPP/mirrlogA[fs_mount:119] : should match /oracle/EPP/mirrlogA from CuAt ODM and /oracle/EPP/mirrlogA from the LVCB
+epprd_rg:cl_activate_fs(1.200):/oracle/EPP/mirrlogA[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(1.200):/oracle/EPP/mirrlogA[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(1.200):/oracle/EPP/mirrlogA[fs_mount:123] [[ /oracle/EPP/mirrlogA != /oracle/EPP/mirrlogA ]]
+epprd_rg:cl_activate_fs(1.200):/oracle/EPP/mirrlogA[fs_mount:128] [[ /oracle/EPP/mirrlogA != /oracle/EPP/mirrlogA ]]
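Both string comparisons above are no-ops because the label from /etc/filesystems, the on-disk LVCB, and CuAt ODM all agree. A minimal standalone sketch of the same three-way check, assuming ksh93 semantics (the last stage of a pipeline runs in the current shell, which the trace relies on); lsfs, getlvcb, and clodmget are the same AIX/PowerHA commands invoked above:

    #!/bin/ksh93
    # Sketch: confirm /etc/filesystems, the LVCB on disk, and CuAt ODM
    # all carry the same label for the filesystem's logical volume.
    FS=/oracle/EPP/mirrlogA                      # mount point under test
    lsfs -c $FS 2>&1 | tail -1 | IFS=: read skip LV_dev_name vfs_type rest
    LV_name=${LV_dev_name##*/}                   # /dev/mirrlogAlv -> mirrlogAlv
    getlvcb -T -A $LV_name 2>&1 | grep -w 'label =' | read skip skip LVCB_label
    CuAt_label=$(clodmget -q "name = $LV_name and attribute = label" -f value -n CuAt)
    [[ "$LVCB_label" != "$FS" ]] && print "LVCB label $LVCB_label does not match $FS"
    [[ "$CuAt_label" != "$FS" ]] && print "CuAt label $CuAt_label does not match $FS"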
+epprd_rg:cl_activate_fs(1.200):/oracle/EPP/mirrlogA[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.200):/oracle/EPP/mirrlogA[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(1.200):/oracle/EPP/mirrlogA[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
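clwparroot exits 0 with empty output here because the resource group defines no WPAR. A condensed sketch of the decision it makes, using the same clodmget query seen in the trace; the path-printing branch is summarized in a comment since it never runs here:

    # Sketch: clwparroot's core decision. With no WPAR_NAME resource in
    # HACMPresource, there is no WPAR root path to print, so the caller
    # captures empty output and fs_mount proceeds with WPAR_ROOT=''.
    wparName=$(clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource)
    if [[ -z $wparName ]]; then
        exit 0          # no WPAR configured; mount in the global environment
    fi
    # otherwise the WPAR's base directory would be printed for the caller
    # to prefix onto the mount point (path logic omitted in this sketch)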
+epprd_rg:cl_activate_fs(1.220):/oracle/EPP/mirrlogA[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.220):/oracle/EPP/mirrlogA[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(1.220):/oracle/EPP/mirrlogA[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP/mirrlogA'
+epprd_rg:cl_activate_fs(1.220):/oracle/EPP/mirrlogA[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.221):/oracle/EPP/mirrlogA[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.245):/oracle/EPP/mirrlogA[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.248):/oracle/EPP/mirrlogA[amlog_trace:319] DATE=2023-01-28T18:03:45.588097
+epprd_rg:cl_activate_fs(1.248):/oracle/EPP/mirrlogA[amlog_trace:320] echo '|2023-01-28T18:03:45.588097|INFO: Activating Filesystem|/oracle/EPP/mirrlogA'
+epprd_rg:cl_activate_fs(1.248):/oracle/EPP/mirrlogA[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
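Every amlog_trace call follows the same three-step pattern just traced: rotate the availability log, fetch a timestamp, append one pipe-delimited record. A sketch using the same PowerHA utilities shown above:

    # Sketch of the amlog_trace pattern: clcycle rotates the log if needed,
    # cltime produces the timestamp, and one record is appended in the form
    # |timestamp|INFO: message|object.
    clcycle clavailability.log > /dev/null 2>&1
    DATE=$(cltime)
    echo "|$DATE|INFO: Activating Filesystem|/oracle/EPP/mirrlogA" \
        >> /var/hacmp/availability/clavailability.log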
+epprd_rg:cl_activate_fs(1.248):/oracle/EPP/mirrlogA[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(1.250):/oracle/EPP/mirrlogA[fs_mount:162] : Try to mount filesystem /oracle/EPP/mirrlogA at Jan 28 18:03:45.000
+epprd_rg:cl_activate_fs(1.251):/oracle/EPP/mirrlogA[fs_mount:163] mount /oracle/EPP/mirrlogA
+epprd_rg:cl_activate_fs(1.262):/oracle/EPP/mirrlogA[fs_mount:209] (( 0 == 1 ))
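The mount succeeded, so the failure branch at fs_mount:209 is skipped. A hedged sketch of the mount step; the fsck-and-retry fallback is an assumption inferred from the TOOL=fsck argument passed into fs_mount, since the recovery path never runs anywhere in this trace:

    # Hedged sketch: mount, and on failure attempt a repair with the tool
    # named by TOOL (fsck here) before retrying. Assumption only; the
    # trace above shows just the success path.
    FS=/oracle/EPP/mirrlogA
    if ! mount $FS; then
        fsck -y $FS                 # attempt automatic repair
        mount $FS || STATUS=1       # surface the failure to the caller
    fi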
+epprd_rg:cl_activate_fs(1.262):/oracle/EPP/mirrlogA[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(1.262):/oracle/EPP/mirrlogA[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(1.262):/oracle/EPP/mirrlogA[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP/mirrlogA'
+epprd_rg:cl_activate_fs(1.262):/oracle/EPP/mirrlogA[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.263):/oracle/EPP/mirrlogA[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.287):/oracle/EPP/mirrlogA[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.290):/oracle/EPP/mirrlogA[amlog_trace:319] DATE=2023-01-28T18:03:45.630003
+epprd_rg:cl_activate_fs(1.290):/oracle/EPP/mirrlogA[amlog_trace:320] echo '|2023-01-28T18:03:45.630003|INFO: Activating Filesystems completed|/oracle/EPP/mirrlogA'
+epprd_rg:cl_activate_fs(1.290):/oracle/EPP/mirrlogA[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.290):/oracle/EPP/mirrlogA[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(1.290):/oracle/EPP/mirrlogA[fs_mount:226] : Each of the V, R, M and F fields is padded to a fixed length,
+epprd_rg:cl_activate_fs(1.290):/oracle/EPP/mirrlogA[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(1.290):/oracle/EPP/mirrlogA[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(1.290):/oracle/EPP/mirrlogA[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(1.290):/oracle/EPP/mirrlogA[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(1.290):/oracle/EPP/mirrlogA[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(1.290):/oracle/EPP/mirrlogA[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(1.290):/oracle/EPP/mirrlogA[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(1.290):/oracle/EPP/mirrlogA[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(1.291):/oracle/EPP/mirrlogA[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(1.291):/oracle/EPP/mirrlogA[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(1.293):/oracle/EPP/mirrlogA[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(1.293):/oracle/EPP/mirrlogA[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogA[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogA[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogA[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogA[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogA[fs_mount:245] : the setting would cause a VG timestamp change, so run it only once
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogA[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(1.174):/oracle/EPP/mirrlogA[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.14 \n\t lvname = mirrlogAlv \n\t label = /oracle/EPP/mirrlogA \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:49 2022\n \t time modified = Sat Jan 28 17:10:41 2023\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogA[fs_mount:255] return 0
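The VRMF arithmetic above turns the dotted fileset level 7.2.5.102 into the integer 702005102: typeset -Z zero-fills R to two digits and M and F to three, so concatenation yields a number that compares cleanly against the 701001000 (AIX 7.1 TL1) threshold for mountguard support. A sketch of the same trick; the chfs call in the comments is an assumption about the skipped branch, since this LVCB already shows mountguard=yes:

    # Sketch: fixed-width VRMF comparison as done in fs_mount. typeset -Z
    # pads with leading zeros on expansion, so 7.2.5.102 -> 702005102.
    typeset -li V VRMF
    typeset -Z2 R
    typeset -Z3 M F
    lslpp -lcqOr bos.rte.filesystem | cut -f3 -d: | IFS=. read V R M F
    VRMF=$V$R$M$F
    if (( V == 7 && VRMF >= 701001000 )); then
        : 'mountguard is supported at this level; when the LVCB fs string'
        : 'lacks mountguard=yes, the script would enable it once, e.g.'
        : 'chfs -a mountguard=yes $FS (assumed; branch not taken above)'
    fi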
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[activate_fs_process_group:540] fs_mount /oracle/EPP/mirrlogB fsck epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:69] FS=/oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:81] : Here check to see if the information in /etc/filesystems for /oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(1.294):/oracle/EPP/mirrlogB[fs_mount:86] lsfs -c /oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs(1.295):/oracle/EPP/mirrlogB[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(1.300):/oracle/EPP/mirrlogB[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(1.295):/oracle/EPP/mirrlogB[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/mirrlogB:/dev/mirrlogBlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(1.300):/oracle/EPP/mirrlogB[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(1.300):/oracle/EPP/mirrlogB[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.301):/oracle/EPP/mirrlogB[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(1.295):/oracle/EPP/mirrlogB[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/mirrlogB:/dev/mirrlogBlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(1.302):/oracle/EPP/mirrlogB[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(1.302):/oracle/EPP/mirrlogB[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(1.303):/oracle/EPP/mirrlogB[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(1.304):/oracle/EPP/mirrlogB[fs_mount:100] LV_name=mirrlogBlv
+epprd_rg:cl_activate_fs(1.304):/oracle/EPP/mirrlogB[fs_mount:101] getlvcb -T -A mirrlogBlv
+epprd_rg:cl_activate_fs(1.305):/oracle/EPP/mirrlogB[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(1.322):/oracle/EPP/mirrlogB[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(1.305):/oracle/EPP/mirrlogB[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.15 \n\t lvname = mirrlogBlv \n\t label = /oracle/EPP/mirrlogB \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:50 2022\n \t time modified = Sat Jan 28 17:10:42 2023\n '
+epprd_rg:cl_activate_fs(1.322):/oracle/EPP/mirrlogB[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(1.322):/oracle/EPP/mirrlogB[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.323):/oracle/EPP/mirrlogB[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(1.305):/oracle/EPP/mirrlogB[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.15 \n\t lvname = mirrlogBlv \n\t label = /oracle/EPP/mirrlogB \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:50 2022\n \t time modified = Sat Jan 28 17:10:42 2023\n '
+epprd_rg:cl_activate_fs(1.324):/oracle/EPP/mirrlogB[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(1.325):/oracle/EPP/mirrlogB[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(1.326):/oracle/EPP/mirrlogB[fs_mount:115] clodmget -q 'name = mirrlogBlv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(1.330):/oracle/EPP/mirrlogB[fs_mount:115] CuAt_label=/oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs(1.330):/oracle/EPP/mirrlogB[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP/mirrlogB from /etc/filesystems
+epprd_rg:cl_activate_fs(1.330):/oracle/EPP/mirrlogB[fs_mount:119] : should match /oracle/EPP/mirrlogB from CuAt ODM and /oracle/EPP/mirrlogB from the LVCB
+epprd_rg:cl_activate_fs(1.330):/oracle/EPP/mirrlogB[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(1.330):/oracle/EPP/mirrlogB[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(1.330):/oracle/EPP/mirrlogB[fs_mount:123] [[ /oracle/EPP/mirrlogB != /oracle/EPP/mirrlogB ]]
+epprd_rg:cl_activate_fs(1.330):/oracle/EPP/mirrlogB[fs_mount:128] [[ /oracle/EPP/mirrlogB != /oracle/EPP/mirrlogB ]]
+epprd_rg:cl_activate_fs(1.330):/oracle/EPP/mirrlogB[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.330):/oracle/EPP/mirrlogB[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(1.330):/oracle/EPP/mirrlogB[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(1.350):/oracle/EPP/mirrlogB[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.350):/oracle/EPP/mirrlogB[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(1.350):/oracle/EPP/mirrlogB[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP/mirrlogB'
+epprd_rg:cl_activate_fs(1.350):/oracle/EPP/mirrlogB[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.351):/oracle/EPP/mirrlogB[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.374):/oracle/EPP/mirrlogB[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.377):/oracle/EPP/mirrlogB[amlog_trace:319] DATE=2023-01-28T18:03:45.717264
+epprd_rg:cl_activate_fs(1.377):/oracle/EPP/mirrlogB[amlog_trace:320] echo '|2023-01-28T18:03:45.717264|INFO: Activating Filesystem|/oracle/EPP/mirrlogB'
+epprd_rg:cl_activate_fs(1.377):/oracle/EPP/mirrlogB[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.377):/oracle/EPP/mirrlogB[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(1.380):/oracle/EPP/mirrlogB[fs_mount:162] : Try to mount filesystem /oracle/EPP/mirrlogB at Jan 28 18:03:45.000
+epprd_rg:cl_activate_fs(1.380):/oracle/EPP/mirrlogB[fs_mount:163] mount /oracle/EPP/mirrlogB
+epprd_rg:cl_activate_fs(1.391):/oracle/EPP/mirrlogB[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.391):/oracle/EPP/mirrlogB[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(1.391):/oracle/EPP/mirrlogB[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(1.391):/oracle/EPP/mirrlogB[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP/mirrlogB'
+epprd_rg:cl_activate_fs(1.391):/oracle/EPP/mirrlogB[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.392):/oracle/EPP/mirrlogB[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.416):/oracle/EPP/mirrlogB[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.419):/oracle/EPP/mirrlogB[amlog_trace:319] DATE=2023-01-28T18:03:45.758923
+epprd_rg:cl_activate_fs(1.419):/oracle/EPP/mirrlogB[amlog_trace:320] echo '|2023-01-28T18:03:45.758923|INFO: Activating Filesystems completed|/oracle/EPP/mirrlogB'
+epprd_rg:cl_activate_fs(1.419):/oracle/EPP/mirrlogB[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.419):/oracle/EPP/mirrlogB[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(1.419):/oracle/EPP/mirrlogB[fs_mount:226] : Each of the V, R, M and F fields is padded to a fixed length,
+epprd_rg:cl_activate_fs(1.419):/oracle/EPP/mirrlogB[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(1.419):/oracle/EPP/mirrlogB[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(1.419):/oracle/EPP/mirrlogB[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(1.419):/oracle/EPP/mirrlogB[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(1.419):/oracle/EPP/mirrlogB[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(1.419):/oracle/EPP/mirrlogB[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(1.419):/oracle/EPP/mirrlogB[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(1.419):/oracle/EPP/mirrlogB[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(1.420):/oracle/EPP/mirrlogB[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(1.421):/oracle/EPP/mirrlogB[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(1.422):/oracle/EPP/mirrlogB[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(1.422):/oracle/EPP/mirrlogB[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/mirrlogB[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/mirrlogB[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/mirrlogB[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/mirrlogB[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/mirrlogB[fs_mount:245] : the setting would cause a VG timestamp change, so run it only once
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/mirrlogB[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(1.305):/oracle/EPP/mirrlogB[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.15 \n\t lvname = mirrlogBlv \n\t label = /oracle/EPP/mirrlogB \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:50 2022\n \t time modified = Sat Jan 28 17:10:42 2023\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/mirrlogB[fs_mount:255] return 0
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP/oraarch
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[activate_fs_process_group:540] fs_mount /oracle/EPP/oraarch fsck epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:69] FS=/oracle/EPP/oraarch
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:81] : Here check to see if the information in /etc/filesystems for /oracle/EPP/oraarch
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(1.423):/oracle/EPP/oraarch[fs_mount:86] lsfs -c /oracle/EPP/oraarch
+epprd_rg:cl_activate_fs(1.424):/oracle/EPP/oraarch[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(1.429):/oracle/EPP/oraarch[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(1.424):/oracle/EPP/oraarch[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/oraarch:/dev/oraarchlv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_activate_fs(1.429):/oracle/EPP/oraarch[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(1.429):/oracle/EPP/oraarch[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.430):/oracle/EPP/oraarch[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(1.424):/oracle/EPP/oraarch[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/oraarch:/dev/oraarchlv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_activate_fs(1.431):/oracle/EPP/oraarch[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(1.432):/oracle/EPP/oraarch[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(1.432):/oracle/EPP/oraarch[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(1.433):/oracle/EPP/oraarch[fs_mount:100] LV_name=oraarchlv
+epprd_rg:cl_activate_fs(1.433):/oracle/EPP/oraarch[fs_mount:101] getlvcb -T -A oraarchlv
+epprd_rg:cl_activate_fs(1.434):/oracle/EPP/oraarch[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(1.452):/oracle/EPP/oraarch[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(1.434):/oracle/EPP/oraarch[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.6 \n\t lvname = oraarchlv \n\t label = /oracle/EPP/oraarch \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:43 2022\n \t time modified = Sat Jan 28 17:10:42 2023\n '
+epprd_rg:cl_activate_fs(1.452):/oracle/EPP/oraarch[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(1.452):/oracle/EPP/oraarch[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.454):/oracle/EPP/oraarch[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(1.454):/oracle/EPP/oraarch[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(1.434):/oracle/EPP/oraarch[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.6 \n\t lvname = oraarchlv \n\t label = /oracle/EPP/oraarch \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:43 2022\n \t time modified = Sat Jan 28 17:10:42 2023\n '
+epprd_rg:cl_activate_fs(1.455):/oracle/EPP/oraarch[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(1.455):/oracle/EPP/oraarch[fs_mount:115] clodmget -q 'name = oraarchlv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(1.459):/oracle/EPP/oraarch[fs_mount:115] CuAt_label=/oracle/EPP/oraarch
+epprd_rg:cl_activate_fs(1.459):/oracle/EPP/oraarch[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP/oraarch from /etc/filesystems
+epprd_rg:cl_activate_fs(1.459):/oracle/EPP/oraarch[fs_mount:119] : should match /oracle/EPP/oraarch from CuAt ODM and /oracle/EPP/oraarch from the LVCB
+epprd_rg:cl_activate_fs(1.459):/oracle/EPP/oraarch[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(1.459):/oracle/EPP/oraarch[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(1.459):/oracle/EPP/oraarch[fs_mount:123] [[ /oracle/EPP/oraarch != /oracle/EPP/oraarch ]]
+epprd_rg:cl_activate_fs(1.459):/oracle/EPP/oraarch[fs_mount:128] [[ /oracle/EPP/oraarch != /oracle/EPP/oraarch ]]
+epprd_rg:cl_activate_fs(1.459):/oracle/EPP/oraarch[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.459):/oracle/EPP/oraarch[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(1.459):/oracle/EPP/oraarch[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(1.479):/oracle/EPP/oraarch[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.479):/oracle/EPP/oraarch[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(1.479):/oracle/EPP/oraarch[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP/oraarch'
+epprd_rg:cl_activate_fs(1.479):/oracle/EPP/oraarch[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.480):/oracle/EPP/oraarch[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.503):/oracle/EPP/oraarch[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.506):/oracle/EPP/oraarch[amlog_trace:319] DATE=2023-01-28T18:03:45.846329
+epprd_rg:cl_activate_fs(1.506):/oracle/EPP/oraarch[amlog_trace:320] echo '|2023-01-28T18:03:45.846329|INFO: Activating Filesystem|/oracle/EPP/oraarch'
+epprd_rg:cl_activate_fs(1.506):/oracle/EPP/oraarch[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.506):/oracle/EPP/oraarch[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(1.509):/oracle/EPP/oraarch[fs_mount:162] : Try to mount filesystem /oracle/EPP/oraarch at Jan 28 18:03:45.000
+epprd_rg:cl_activate_fs(1.509):/oracle/EPP/oraarch[fs_mount:163] mount /oracle/EPP/oraarch
+epprd_rg:cl_activate_fs(1.520):/oracle/EPP/oraarch[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.520):/oracle/EPP/oraarch[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(1.520):/oracle/EPP/oraarch[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(1.520):/oracle/EPP/oraarch[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP/oraarch'
+epprd_rg:cl_activate_fs(1.520):/oracle/EPP/oraarch[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.521):/oracle/EPP/oraarch[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.545):/oracle/EPP/oraarch[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.548):/oracle/EPP/oraarch[amlog_trace:319] DATE=2023-01-28T18:03:45.888187
+epprd_rg:cl_activate_fs(1.548):/oracle/EPP/oraarch[amlog_trace:320] echo '|2023-01-28T18:03:45.888187|INFO: Activating Filesystems completed|/oracle/EPP/oraarch'
+epprd_rg:cl_activate_fs(1.548):/oracle/EPP/oraarch[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.548):/oracle/EPP/oraarch[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(1.548):/oracle/EPP/oraarch[fs_mount:226] : Each of the V, R, M and F fields is padded to a fixed length,
+epprd_rg:cl_activate_fs(1.548):/oracle/EPP/oraarch[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(1.548):/oracle/EPP/oraarch[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(1.548):/oracle/EPP/oraarch[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(1.548):/oracle/EPP/oraarch[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(1.548):/oracle/EPP/oraarch[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(1.548):/oracle/EPP/oraarch[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(1.548):/oracle/EPP/oraarch[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(1.548):/oracle/EPP/oraarch[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(1.549):/oracle/EPP/oraarch[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(1.550):/oracle/EPP/oraarch[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/oraarch[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/oraarch[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/oraarch[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/oraarch[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/oraarch[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/oraarch[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/oraarch[fs_mount:245] : the setting would cause a VG timestamp change, so run it only once
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/oraarch[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(1.434):/oracle/EPP/oraarch[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.6 \n\t lvname = oraarchlv \n\t label = /oracle/EPP/oraarch \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:43 2022\n \t time modified = Sat Jan 28 17:10:42 2023\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/oraarch[fs_mount:255] return 0
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP/origlogA
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[activate_fs_process_group:540] fs_mount /oracle/EPP/origlogA fsck epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:69] FS=/oracle/EPP/origlogA
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:81] : Here check to see if the information in /etc/filesystems for /oracle/EPP/origlogA
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(1.552):/oracle/EPP/origlogA[fs_mount:86] lsfs -c /oracle/EPP/origlogA
+epprd_rg:cl_activate_fs(1.553):/oracle/EPP/origlogA[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(1.558):/oracle/EPP/origlogA[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(1.554):/oracle/EPP/origlogA[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/origlogA:/dev/origlogAlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(1.558):/oracle/EPP/origlogA[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(1.558):/oracle/EPP/origlogA[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.559):/oracle/EPP/origlogA[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(1.554):/oracle/EPP/origlogA[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/origlogA:/dev/origlogAlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(1.560):/oracle/EPP/origlogA[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(1.561):/oracle/EPP/origlogA[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(1.561):/oracle/EPP/origlogA[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(1.562):/oracle/EPP/origlogA[fs_mount:100] LV_name=origlogAlv
+epprd_rg:cl_activate_fs(1.562):/oracle/EPP/origlogA[fs_mount:101] getlvcb -T -A origlogAlv
+epprd_rg:cl_activate_fs(1.563):/oracle/EPP/origlogA[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(1.581):/oracle/EPP/origlogA[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(1.563):/oracle/EPP/origlogA[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.12 \n\t lvname = origlogAlv \n\t label = /oracle/EPP/origlogA \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:48 2022\n \t time modified = Sat Jan 28 17:10:42 2023\n '
+epprd_rg:cl_activate_fs(1.581):/oracle/EPP/origlogA[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(1.581):/oracle/EPP/origlogA[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.582):/oracle/EPP/origlogA[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(1.563):/oracle/EPP/origlogA[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.12 \n\t lvname = origlogAlv \n\t label = /oracle/EPP/origlogA \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:48 2022\n \t time modified = Sat Jan 28 17:10:42 2023\n '
+epprd_rg:cl_activate_fs(1.583):/oracle/EPP/origlogA[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(1.584):/oracle/EPP/origlogA[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(1.585):/oracle/EPP/origlogA[fs_mount:115] clodmget -q 'name = origlogAlv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(1.588):/oracle/EPP/origlogA[fs_mount:115] CuAt_label=/oracle/EPP/origlogA
+epprd_rg:cl_activate_fs(1.588):/oracle/EPP/origlogA[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP/origlogA from /etc/filesystems
+epprd_rg:cl_activate_fs(1.588):/oracle/EPP/origlogA[fs_mount:119] : should match /oracle/EPP/origlogA from CuAt ODM and /oracle/EPP/origlogA from the LVCB
+epprd_rg:cl_activate_fs(1.588):/oracle/EPP/origlogA[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(1.588):/oracle/EPP/origlogA[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(1.588):/oracle/EPP/origlogA[fs_mount:123] [[ /oracle/EPP/origlogA != /oracle/EPP/origlogA ]]
+epprd_rg:cl_activate_fs(1.589):/oracle/EPP/origlogA[fs_mount:128] [[ /oracle/EPP/origlogA != /oracle/EPP/origlogA ]]
+epprd_rg:cl_activate_fs(1.589):/oracle/EPP/origlogA[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.589):/oracle/EPP/origlogA[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(1.589):/oracle/EPP/origlogA[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(1.608):/oracle/EPP/origlogA[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.608):/oracle/EPP/origlogA[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(1.608):/oracle/EPP/origlogA[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP/origlogA'
+epprd_rg:cl_activate_fs(1.608):/oracle/EPP/origlogA[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.609):/oracle/EPP/origlogA[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.633):/oracle/EPP/origlogA[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.636):/oracle/EPP/origlogA[amlog_trace:319] DATE=2023-01-28T18:03:45.976012
+epprd_rg:cl_activate_fs(1.636):/oracle/EPP/origlogA[amlog_trace:320] echo '|2023-01-28T18:03:45.976012|INFO: Activating Filesystem|/oracle/EPP/origlogA'
+epprd_rg:cl_activate_fs(1.636):/oracle/EPP/origlogA[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.636):/oracle/EPP/origlogA[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(1.638):/oracle/EPP/origlogA[fs_mount:162] : Try to mount filesystem /oracle/EPP/origlogA at Jan 28 18:03:45.000
+epprd_rg:cl_activate_fs(1.638):/oracle/EPP/origlogA[fs_mount:163] mount /oracle/EPP/origlogA
+epprd_rg:cl_activate_fs(1.650):/oracle/EPP/origlogA[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.650):/oracle/EPP/origlogA[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(1.650):/oracle/EPP/origlogA[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(1.650):/oracle/EPP/origlogA[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP/origlogA'
+epprd_rg:cl_activate_fs(1.650):/oracle/EPP/origlogA[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.651):/oracle/EPP/origlogA[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.674):/oracle/EPP/origlogA[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.677):/oracle/EPP/origlogA[amlog_trace:319] DATE=2023-01-28T18:03:46.017404
+epprd_rg:cl_activate_fs(1.677):/oracle/EPP/origlogA[amlog_trace:320] echo '|2023-01-28T18:03:46.017404|INFO: Activating Filesystems completed|/oracle/EPP/origlogA'
+epprd_rg:cl_activate_fs(1.677):/oracle/EPP/origlogA[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.677):/oracle/EPP/origlogA[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(1.677):/oracle/EPP/origlogA[fs_mount:226] : Each of the V, R, M and F fields is padded to a fixed length,
+epprd_rg:cl_activate_fs(1.677):/oracle/EPP/origlogA[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(1.677):/oracle/EPP/origlogA[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(1.677):/oracle/EPP/origlogA[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(1.677):/oracle/EPP/origlogA[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(1.677):/oracle/EPP/origlogA[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(1.677):/oracle/EPP/origlogA[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(1.677):/oracle/EPP/origlogA[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(1.677):/oracle/EPP/origlogA[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(1.678):/oracle/EPP/origlogA[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(1.679):/oracle/EPP/origlogA[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogA[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogA[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogA[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogA[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogA[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogA[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogA[fs_mount:245] : the setting would cause a VG timestamp change, so run it only once
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogA[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(1.563):/oracle/EPP/origlogA[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.12 \n\t lvname = origlogAlv \n\t label = /oracle/EPP/origlogA \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:48 2022\n \t time modified = Sat Jan 28 17:10:42 2023\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogA[fs_mount:255] return 0
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogB[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP/origlogB
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogB[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogB[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogB[activate_fs_process_group:540] fs_mount /oracle/EPP/origlogB fsck epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogB[fs_mount:69] FS=/oracle/EPP/origlogB
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogB[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogB[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogB[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogB[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogB[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogB[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogB[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogB[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogB[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(1.681):/oracle/EPP/origlogB[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(1.682):/oracle/EPP/origlogB[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(1.682):/oracle/EPP/origlogB[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(1.682):/oracle/EPP/origlogB[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(1.682):/oracle/EPP/origlogB[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(1.682):/oracle/EPP/origlogB[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(1.682):/oracle/EPP/origlogB[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(1.682):/oracle/EPP/origlogB[fs_mount:81] : Here check to see if the information in /etc/filesystems for /oracle/EPP/origlogB
+epprd_rg:cl_activate_fs(1.682):/oracle/EPP/origlogB[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(1.682):/oracle/EPP/origlogB[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(1.682):/oracle/EPP/origlogB[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(1.682):/oracle/EPP/origlogB[fs_mount:86] lsfs -c /oracle/EPP/origlogB
+epprd_rg:cl_activate_fs(1.683):/oracle/EPP/origlogB[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(1.687):/oracle/EPP/origlogB[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(1.683):/oracle/EPP/origlogB[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/origlogB:/dev/origlogBlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(1.687):/oracle/EPP/origlogB[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(1.687):/oracle/EPP/origlogB[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.688):/oracle/EPP/origlogB[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(1.683):/oracle/EPP/origlogB[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/origlogB:/dev/origlogBlv:jfs2:::10485760:rw:no:no'
+epprd_rg:cl_activate_fs(1.689):/oracle/EPP/origlogB[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(1.690):/oracle/EPP/origlogB[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(1.690):/oracle/EPP/origlogB[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(1.691):/oracle/EPP/origlogB[fs_mount:100] LV_name=origlogBlv
+epprd_rg:cl_activate_fs(1.691):/oracle/EPP/origlogB[fs_mount:101] getlvcb -T -A origlogBlv
+epprd_rg:cl_activate_fs(1.692):/oracle/EPP/origlogB[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(1.710):/oracle/EPP/origlogB[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(1.692):/oracle/EPP/origlogB[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.13 \n\t lvname = origlogBlv \n\t label = /oracle/EPP/origlogB \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:49 2022\n \t time modified = Sat Jan 28 17:10:42 2023\n '
+epprd_rg:cl_activate_fs(1.710):/oracle/EPP/origlogB[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(1.710):/oracle/EPP/origlogB[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.711):/oracle/EPP/origlogB[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(1.692):/oracle/EPP/origlogB[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.13 \n\t lvname = origlogBlv \n\t label = /oracle/EPP/origlogB \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:49 2022\n \t time modified = Sat Jan 28 17:10:42 2023\n '
+epprd_rg:cl_activate_fs(1.712):/oracle/EPP/origlogB[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(1.713):/oracle/EPP/origlogB[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(1.714):/oracle/EPP/origlogB[fs_mount:115] clodmget -q 'name = origlogBlv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(1.718):/oracle/EPP/origlogB[fs_mount:115] CuAt_label=/oracle/EPP/origlogB
+epprd_rg:cl_activate_fs(1.718):/oracle/EPP/origlogB[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP/origlogB from /etc/filesystems
+epprd_rg:cl_activate_fs(1.718):/oracle/EPP/origlogB[fs_mount:119] : should match /oracle/EPP/origlogB from CuAt ODM and /oracle/EPP/origlogB from the LVCB
+epprd_rg:cl_activate_fs(1.718):/oracle/EPP/origlogB[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(1.718):/oracle/EPP/origlogB[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(1.718):/oracle/EPP/origlogB[fs_mount:123] [[ /oracle/EPP/origlogB != /oracle/EPP/origlogB ]]
+epprd_rg:cl_activate_fs(1.718):/oracle/EPP/origlogB[fs_mount:128] [[ /oracle/EPP/origlogB != /oracle/EPP/origlogB ]]
+epprd_rg:cl_activate_fs(1.718):/oracle/EPP/origlogB[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.718):/oracle/EPP/origlogB[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(1.718):/oracle/EPP/origlogB[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
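clwparroot prints the WPAR root path for a resource group; loadWparName resolves the WPAR name by querying the HACMPresource ODM class for a WPAR_NAME resource. An empty result means the group is not WPAR-enabled, so clwparroot exits 0 with no output and the caller proceeds with an empty WPAR_ROOT. A condensed ksh sketch of the path traced above (the real loadWparName has more branches than shown):

    loadWparName() {
        typeset rgName=$1
        typeset wparName=$(clodmget -q "name = WPAR_NAME" -f value -n HACMPresource)
        [[ -z $wparName ]] && return 0        # no WPAR resource configured
        print -- "$wparName"
    }
    wparName=$(loadWparName epprd_rg)
    [[ -z $wparName ]] && exit 0              # not WPAR-enabled: print nothing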
+epprd_rg:cl_activate_fs(1.738):/oracle/EPP/origlogB[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.738):/oracle/EPP/origlogB[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(1.738):/oracle/EPP/origlogB[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP/origlogB'
+epprd_rg:cl_activate_fs(1.738):/oracle/EPP/origlogB[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.739):/oracle/EPP/origlogB[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.763):/oracle/EPP/origlogB[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.766):/oracle/EPP/origlogB[amlog_trace:319] DATE=2023-01-28T18:03:46.105842
+epprd_rg:cl_activate_fs(1.766):/oracle/EPP/origlogB[amlog_trace:320] echo '|2023-01-28T18:03:46.105842|INFO: Activating Filesystem|/oracle/EPP/origlogB'
+epprd_rg:cl_activate_fs(1.766):/oracle/EPP/origlogB[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
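amlog_trace writes one pipe-delimited availability record per event: it rotates clavailability.log with clcycle, stamps the entry with cltime, and appends to /var/hacmp/availability/clavailability.log. An approximation of the function as traced (clcycle and cltime are PowerHA utilities; the INFO: prefix matches the record written here):

    amlog_trace() {
        typeset errcode=$1 message=$2
        clcycle clavailability.log > /dev/null 2>&1      # rotate the log if needed
        typeset DATE=$(cltime)                           # e.g. 2023-01-28T18:03:46.105842
        echo "|$DATE|INFO: $message" >> /var/hacmp/availability/clavailability.log
    }
    amlog_trace '' 'Activating Filesystem|/oracle/EPP/origlogB'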
+epprd_rg:cl_activate_fs(1.766):/oracle/EPP/origlogB[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(1.768):/oracle/EPP/origlogB[fs_mount:162] : Try to mount filesystem /oracle/EPP/origlogB at Jan 28 18:03:46.000
+epprd_rg:cl_activate_fs(1.768):/oracle/EPP/origlogB[fs_mount:163] mount /oracle/EPP/origlogB
+epprd_rg:cl_activate_fs(1.779):/oracle/EPP/origlogB[fs_mount:209] (( 0 == 1 ))
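The mount succeeded here, so the failure branch at fs_mount:209 is skipped. The TOOL=fsck argument passed into fs_mount suggests the failure path runs a filesystem check and retries, but that branch is never exercised in this log, so the sketch below is an inference, not confirmed behavior:

    # Assumed shape of the mount step with recovery (hypothetical fsck invocation):
    mount $FS
    RC=$?
    if (( RC != 0 )) && [[ $TOOL == fsck ]] ; then
        fsck -y $LV_dev_name        # repair, then retry the mount once
        mount $FS
    fi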
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/origlogB[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/origlogB[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/origlogB[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP/origlogB'
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/origlogB[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.780):/oracle/EPP/origlogB[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.804):/oracle/EPP/origlogB[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.807):/oracle/EPP/origlogB[amlog_trace:319] DATE=2023-01-28T18:03:46.147450
+epprd_rg:cl_activate_fs(1.807):/oracle/EPP/origlogB[amlog_trace:320] echo '|2023-01-28T18:03:46.147450|INFO: Activating Filesystems completed|/oracle/EPP/origlogB'
+epprd_rg:cl_activate_fs(1.807):/oracle/EPP/origlogB[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.807):/oracle/EPP/origlogB[fs_mount:224] [[ jfs2 == jfs2 ]]
+epprd_rg:cl_activate_fs(1.807):/oracle/EPP/origlogB[fs_mount:226] : Each of the V, R, M and F fields are padded to fixed length,
+epprd_rg:cl_activate_fs(1.807):/oracle/EPP/origlogB[fs_mount:227] : to allow reliable comparisons. E.g., maximum VRMF is
+epprd_rg:cl_activate_fs(1.807):/oracle/EPP/origlogB[fs_mount:228] : 99.99.999.999
+epprd_rg:cl_activate_fs(1.807):/oracle/EPP/origlogB[fs_mount:230] typeset -li V R M F
+epprd_rg:cl_activate_fs(1.807):/oracle/EPP/origlogB[fs_mount:231] typeset -Z2 R
+epprd_rg:cl_activate_fs(1.807):/oracle/EPP/origlogB[fs_mount:232] typeset -Z3 M
+epprd_rg:cl_activate_fs(1.807):/oracle/EPP/origlogB[fs_mount:233] typeset -Z3 F
+epprd_rg:cl_activate_fs(1.807):/oracle/EPP/origlogB[fs_mount:234] VRMF=0
+epprd_rg:cl_activate_fs(1.807):/oracle/EPP/origlogB[fs_mount:234] typeset -li VRMF
+epprd_rg:cl_activate_fs(1.808):/oracle/EPP/origlogB[fs_mount:236] lslpp -lcqOr bos.rte.filesystem
+epprd_rg:cl_activate_fs(1.809):/oracle/EPP/origlogB[fs_mount:236] cut -f3 -d:
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/origlogB[fs_mount:236] read V R M F
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/origlogB[fs_mount:236] IFS=.
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/origlogB[fs_mount:237] VRMF=702005102
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/origlogB[fs_mount:240] (( 7 == 6 && 702005102 >= 601007000 ))
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/origlogB[fs_mount:241] (( 7 == 7 && 702005102 >= 701001000 ))
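The fileset level of bos.rte.filesystem (7.2.5.102 here) is flattened into one fixed-width integer so levels can be compared numerically: release is zero-padded to two digits, modification and fix to three, giving 7 02 005 102 = 702005102. Per the comparisons above, mountguard requires at least 6.1.7.0 (601007000) on AIX 6 or 7.1.1.0 (701001000) on AIX 7. A worked ksh example mirroring the trace (typeset -Z zero-fills on expansion; the VRMF=$V$R$M$F composition is assumed from the traced result):

    typeset -li V R M F
    typeset -Z2 R              # pad release to 2 digits
    typeset -Z3 M F            # pad modification and fix to 3 digits
    typeset -li VRMF=0
    lslpp -lcqOr bos.rte.filesystem | cut -f3 -d: | IFS=. read V R M F
    VRMF=$V$R$M$F              # 7.2.5.102 -> 7 02 005 102 -> 702005102
    if (( V == 6 && VRMF >= 601007000 )) || (( V == 7 && VRMF >= 701001000 )) ; then
        : mountguard is available at this level
    fi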
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/origlogB[fs_mount:244] : Tell JFS2 to try to protect against double mounts via fs mountguard
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/origlogB[fs_mount:245] : the setting would cause VG timestamp change so run once
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/origlogB[fs_mount:247] [[ $'+epprd_rg:cl_activate_fs(1.692):/oracle/EPP/origlogB[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.13 \n\t lvname = origlogBlv \n\t label = /oracle/EPP/origlogB \n\t machine id = 44AF14B00 \n\t number lps = 10 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:49 2022\n \t time modified = Sat Jan 28 17:10:42 2023\n ' != *mountguard=yes* ]]
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/origlogB[fs_mount:255] return 0
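Mountguard marks a JFS2 filesystem so a second node cannot mount it while it is mounted elsewhere, PowerHA's protection against double mounts. Enabling it rewrites the VG timestamp, hence the "run once" note above: the flag is only set when the LVCB fs string does not already carry mountguard=yes, which is why this pass returns without changes. A sketch of the guard step (chfs -a mountguard=yes is the standard AIX command; its use here is inferred from the traced test):

    if [[ $LVCB_info != *mountguard=yes* ]] ; then
        chfs -a mountguard=yes $FS      # one-time change; bumps the VG timestamp
    fi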
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/sapdata1[activate_fs_process_group:527] PS4_LOOP=/oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/sapdata1[activate_fs_process_group:528] [[ sequential == parallel ]]
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/sapdata1[activate_fs_process_group:538] : Call fs_mount function in foreground for serial recovery
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/sapdata1[activate_fs_process_group:540] fs_mount /oracle/EPP/sapdata1 fsck epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/sapdata1[fs_mount:69] FS=/oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/sapdata1[fs_mount:69] typeset FS
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/sapdata1[fs_mount:70] TOOL=fsck
+epprd_rg:cl_activate_fs(1.811):/oracle/EPP/sapdata1[fs_mount:70] typeset TOOL
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:71] TMP_FILENAME=epprd_rg_activate_fs.tmp27918684
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:71] typeset TMP_FILENAME
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:72] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:72] typeset WPAR_ROOT
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:73] MOUNT_ARGS=''
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:73] typeset MOUNT_ARGS
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:74] STATUS=0
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:74] typeset -i STATUS
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:75] typeset LVCB_info
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:76] typeset FS_info
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:77] typeset LV_name
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:78] RC=0
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:78] typeset -i RC
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:81] : Here check to see if the information in /etc/filesystems for /oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:82] : is consistent with what is in CuAt ODM for the logical volume:
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:83] : the label field for the logical volume should match the mount
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:84] : point in /etc/filesystems.
+epprd_rg:cl_activate_fs(1.812):/oracle/EPP/sapdata1[fs_mount:86] lsfs -c /oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs(1.813):/oracle/EPP/sapdata1[fs_mount:86] 2>& 1
+epprd_rg:cl_activate_fs(1.818):/oracle/EPP/sapdata1[fs_mount:86] FS_info=$'+epprd_rg:cl_activate_fs(1.813):/oracle/EPP/sapdata1[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata1:/dev/sapdata1lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_activate_fs(1.818):/oracle/EPP/sapdata1[fs_mount:87] RC=0
+epprd_rg:cl_activate_fs(1.818):/oracle/EPP/sapdata1[fs_mount:88] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.819):/oracle/EPP/sapdata1[fs_mount:99] print -- $'+epprd_rg:cl_activate_fs(1.813):/oracle/EPP/sapdata1[fs_mount:86] LC_ALL=C\n#MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct\n/oracle/EPP/sapdata1:/dev/sapdata1lv:jfs2:::104857600:rw:no:no'
+epprd_rg:cl_activate_fs(1.819):/oracle/EPP/sapdata1[fs_mount:99] tail -1
+epprd_rg:cl_activate_fs(1.820):/oracle/EPP/sapdata1[fs_mount:99] read skip LV_dev_name vfs_type rest
+epprd_rg:cl_activate_fs(1.821):/oracle/EPP/sapdata1[fs_mount:99] IFS=:
+epprd_rg:cl_activate_fs(1.822):/oracle/EPP/sapdata1[fs_mount:100] LV_name=sapdata1lv
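lsfs -c prints a colon-delimited header line followed by one data line per filesystem, so the script keeps only the last line, splits it on ':' to pick up the device, and derives the LV name by stripping the /dev/ prefix. A ksh sketch of the parsing traced above:

    FS_info=$(lsfs -c /oracle/EPP/sapdata1 2>&1)
    # Fields: MountPoint:Device:Vfs:Nodename:Type:Size:Options:AutoMount:Acct
    print -- "$FS_info" | tail -1 | IFS=: read skip LV_dev_name vfs_type rest
    LV_name=${LV_dev_name#/dev/}        # /dev/sapdata1lv -> sapdata1lv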
+epprd_rg:cl_activate_fs(1.822):/oracle/EPP/sapdata1[fs_mount:101] getlvcb -T -A sapdata1lv
+epprd_rg:cl_activate_fs(1.823):/oracle/EPP/sapdata1[fs_mount:101] 2>& 1
+epprd_rg:cl_activate_fs(1.840):/oracle/EPP/sapdata1[fs_mount:101] LVCB_info=$'+epprd_rg:cl_activate_fs(1.823):/oracle/EPP/sapdata1[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.7 \n\t lvname = sapdata1lv \n\t label = /oracle/EPP/sapdata1 \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:44 2022\n \t time modified = Sat Jan 28 17:10:43 2023\n '
+epprd_rg:cl_activate_fs(1.840):/oracle/EPP/sapdata1[fs_mount:102] RC=0
+epprd_rg:cl_activate_fs(1.840):/oracle/EPP/sapdata1[fs_mount:103] (( 0 != 0 ))
+epprd_rg:cl_activate_fs(1.841):/oracle/EPP/sapdata1[fs_mount:114] print -- $'+epprd_rg:cl_activate_fs(1.823):/oracle/EPP/sapdata1[fs_mount:101] LC_ALL=C\n\t AIX LVCB\n\t intrapolicy = c \n\t copies = 1 \n\t interpolicy = x \n\t lvid = 00c44af100004b00000001851e9dc053.7 \n\t lvname = sapdata1lv \n\t label = /oracle/EPP/sapdata1 \n\t machine id = 44AF14B00 \n\t number lps = 100 \n\t relocatable = y \n\t strict = y \n\t stripe width = 0 \n\t stripe size in exponent = 0 \n\t type = jfs2 \n\t upperbound = 1024 \n\t fs = vfs=jfs2:log=/dev/epprdaloglv:account=false:mountguard=yes \n\t time created = Sat Dec 17 14:46:44 2022\n \t time modified = Sat Jan 28 17:10:43 2023\n '
+epprd_rg:cl_activate_fs(1.842):/oracle/EPP/sapdata1[fs_mount:114] grep -w 'label ='
+epprd_rg:cl_activate_fs(1.843):/oracle/EPP/sapdata1[fs_mount:114] read skip skip LVCB_label
+epprd_rg:cl_activate_fs(1.844):/oracle/EPP/sapdata1[fs_mount:115] clodmget -q 'name = sapdata1lv and attribute = label' -f value -n CuAt
+epprd_rg:cl_activate_fs(1.848):/oracle/EPP/sapdata1[fs_mount:115] CuAt_label=/oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs(1.848):/oracle/EPP/sapdata1[fs_mount:118] : At this point, if things are working correctly, /oracle/EPP/sapdata1 from /etc/filesystems
+epprd_rg:cl_activate_fs(1.848):/oracle/EPP/sapdata1[fs_mount:119] : should match /oracle/EPP/sapdata1 from CuAt ODM and /oracle/EPP/sapdata1 from the LVCB
+epprd_rg:cl_activate_fs(1.848):/oracle/EPP/sapdata1[fs_mount:120] : on disk. No recovery is done at this point, because best efforts at recovery
+epprd_rg:cl_activate_fs(1.848):/oracle/EPP/sapdata1[fs_mount:121] : were done in clvaryonvg.
+epprd_rg:cl_activate_fs(1.848):/oracle/EPP/sapdata1[fs_mount:123] [[ /oracle/EPP/sapdata1 != /oracle/EPP/sapdata1 ]]
+epprd_rg:cl_activate_fs(1.848):/oracle/EPP/sapdata1[fs_mount:128] [[ /oracle/EPP/sapdata1 != /oracle/EPP/sapdata1 ]]
+epprd_rg:cl_activate_fs(1.848):/oracle/EPP/sapdata1[fs_mount:133] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.848):/oracle/EPP/sapdata1[fs_mount:143] [[ -n epprd_rg ]]
+epprd_rg:cl_activate_fs(1.848):/oracle/EPP/sapdata1[fs_mount:143] clwparroot epprd_rg
+epprd_rg:clwparroot[42] [[ high == high ]]
+epprd_rg:clwparroot[42] version=1.1
+epprd_rg:clwparroot[44] . /usr/es/sbin/cluster/wpar/wpar_utils
+epprd_rg:clwparroot[11] . /usr/es/sbin/cluster/wpar/wpar_common_funcs
+epprd_rg:clwparroot[26] [[ high == high ]]
+epprd_rg:clwparroot[26] set -x
+epprd_rg:clwparroot[27] [[ high == high ]]
+epprd_rg:clwparroot[27] version='1.6 $Source: 61haes_r711 43haes/usr/sbin/cluster/wpar/wpar_common_funcs.sh 1$'
+epprd_rg:clwparroot[29] PATH=/usr/bin:/usr/sbin:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/events:/usr/es/sbin/cluster/events/utils:/usr/es/sbin/cluster/events/cmd:/usr/es/sbin/cluster/diag:/usr/es/sbin/cluster/etc:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc:/usr/es/sbin/cluster/conversion:/usr/es/sbin/cluster/glvm/utils:/usr/es/sbin/cluster/wpar:/usr/es/sbin/cluster/sa/sbin:/usr/lib/cluster:/opt/freeware/bin:/usr/es/sbin/cluster/clanalyze:/etc:/usr/ucb:/usr/bin/X11:/sbin:/bin:/usr/bin:/usr/sbin
+epprd_rg:clwparroot[30] export PATH
+epprd_rg:clwparroot[32] typeset usageErr invalArgErr internalErr
+epprd_rg:clwparroot[34] usageErr=10
+epprd_rg:clwparroot[35] invalArgErr=11
+epprd_rg:clwparroot[36] internalErr=12
+epprd_rg:clwparroot[46] rgName=epprd_rg
+epprd_rg:clwparroot[49] uname
+epprd_rg:clwparroot[49] OSNAME=AIX
+epprd_rg:clwparroot[51] [[ AIX == *AIX* ]]
+epprd_rg:clwparroot[52] lslpp -l bos.wpars
+epprd_rg:clwparroot[52] 1> /dev/null 2>& 1
+epprd_rg:clwparroot[54] loadWparName epprd_rg
+epprd_rg:clwparroot[loadWparName:1484] [[ 1 =~ 1 ]]
+epprd_rg:clwparroot[loadWparName:1490] clodmget -q 'name = WPAR_NAME' -f value -n HACMPresource
+epprd_rg:clwparroot[loadWparName:1490] [[ -z '' ]]
+epprd_rg:clwparroot[loadWparName:1490] return 0
+epprd_rg:clwparroot[54] wparName=''
+epprd_rg:clwparroot[55] (( 0 != 0 ))
+epprd_rg:clwparroot[55] [[ -z '' ]]
+epprd_rg:clwparroot[57] exit 0
+epprd_rg:cl_activate_fs(1.868):/oracle/EPP/sapdata1[fs_mount:143] WPAR_ROOT=''
+epprd_rg:cl_activate_fs(1.868):/oracle/EPP/sapdata1[fs_mount:144] [[ -n '' ]]
+epprd_rg:cl_activate_fs(1.868):/oracle/EPP/sapdata1[fs_mount:160] amlog_trace '' 'Activating Filesystem|/oracle/EPP/sapdata1'
+epprd_rg:cl_activate_fs(1.868):/oracle/EPP/sapdata1[amlog_trace:318] clcycle clavailability.log
+epprd_rg:cl_activate_fs(1.869):/oracle/EPP/sapdata1[amlog_trace:318] 1> /dev/null 2>& 1
+epprd_rg:cl_activate_fs(1.893):/oracle/EPP/sapdata1[amlog_trace:319] cltime
+epprd_rg:cl_activate_fs(1.895):/oracle/EPP/sapdata1[amlog_trace:319] DATE=2023-01-28T18:03:46.235703
+epprd_rg:cl_activate_fs(1.895):/oracle/EPP/sapdata1[amlog_trace:320] echo '|2023-01-28T18:03:46.235703|INFO: Activating Filesystem|/oracle/EPP/sapdata1'
+epprd_rg:cl_activate_fs(1.895):/oracle/EPP/sapdata1[amlog_trace:320] 1>> /var/hacmp/availability/clavailability.log
+epprd_rg:cl_activate_fs(1.896):/oracle/EPP/sapdata1[fs_mount:162] date '+%h %d %H:%M:%S.000'
+epprd_rg:cl_activate_fs(1.898):/oracle/EPP/sapdata1[fs_mount:162] : Try to mount filesystem /oracle/EPP/sapdata1 at Jan 28 18:03:46.000
+epprd_rg:cl_activate_fs(1.898):/oracle/EPP/sapdata1[fs_mount:163] mount /oracle/EPP/sapdata1
+epprd_rg:cl_activate_fs(1.910):/oracle/EPP/sapdata1[fs_mount:209] (( 0 == 1 ))
+epprd_rg:cl_activate_fs(1.910):/oracle/EPP/sapdata1[fs_mount:219] : On successful mount of a JFS2 file system, engage mountguard,
+epprd_rg:cl_activate_fs(1.910):/oracle/EPP/sapdata1[fs_mount:220] : if we are running on an AIX level that supports it
+epprd_rg:cl_activate_fs(1.910):/oracle/EPP/sapdata1[fs_mount:223] amlog_trace '' 'Activating Filesystems completed|/oracle/EPP/sapdata1'