Add SAP Global Filesystem Details
SAP Global File System Owning Node
Takeover Nodes
Service IP Label
Shared Volume Groups
Filesystems/Directories to Export (NFSv2/3)
Filesystems/Directories to NFS Mount
Change/Show SAP Global File System Details
Select Specific Configuration You Wish To Create
Select an Application instance
Add SAP Application Instance Details
SAP Application Server Instance Name
Application Name
SAP SCS/ASCS Instance(s) Name(s)
Change/Show SAP Application Instance Details
Primary Node
Select a SCS/ASCS instance
Select ERS instance(s)
Add SAP SCS/ASCS Instance(s) Details
Change/Show SAP SCS/ASCS Instance(s) Details
Add SAP ERS Instance(s) Details
SAP ERS Instance(s) Name(s)
Select SAP Database instance
Change/Show SAP Database instance details
Select a SAP Instance Configuration Assistant
SAP SYSTEM ID
SAP Application Server Instance No.
SAP SCS/ASCS Instance No(s).
SAP ERS Instance No(s).
Participating Nodes
Volume Group(s)
Network Name
Database Resource Group
SAP Instance Profile
SAP Instance Executable Directory
Sapcontrol WaitforStarted Timeout
Sapcontrol WaitforStarted Timeout Delay
Sapcontrol WaitforStopped Timeout
Sapcontrol WaitforStopped Timeout Delay
ENSA/ERS Sync time
************* SAP Globals WARNING **************** Changing these values will affect all instances **************************************************
IS ERS Enabled
SA SAP XPLATFORM LOGGING
EXIT CODE START sapcontrol Start failed
EXIT CODE START sapcontrol StartService failed
EXIT_CODE_START_sapcontrol_NFS_failed
EXIT_CODE_MONITOR_sapstartsrv_unavailable
EXIT_CODE_MONITOR_failover_on_gw_outage
Notification Level
Change/Show SAP ERS Instance(s) Details
Change/Show SAP AS Instance(s) attributes Details
Change/Show SAP (A)SCS Instance(s) attributes Details
Change/Show SAP ERS Instance(s) attributes Details
EXIT CODE START sapcontrol NFS failed
EXIT CODE MONITOR sapstartsrv unavailable
EXIT CODE MONITOR failover on gw outage
CS OS Connector
ERS OS Connector
AS OS Connector
Volume Group
Add SAP Global Filesystem (/sapmnt/ /usr/sap/trans) details to the PowerHA SystemMirror configuration. Once configured, these filesystems will be made accessible to the primary node as an NFS crossmount and to all takeover nodes using NFS mounts. Please ensure that the filesystems /export/sapmnt/ and /export/usr/sap/trans are mounted on the primary node and that empty /sapmnt/ and /usr/sap/trans directories exist on all nodes. The primary node must be a node already defined to the PowerHA SystemMirror cluster and must be hosting the /export/sapmnt/ and /export/usr/sap/trans filesystems. Takeover Node(s) The takeover node(s) must be existing PowerHA SystemMirror cluster nodes. Takeover nodes will participate in the resource group generated to support the SAP Global Filesystem. Please ensure that all nodes which run the Application, Enqueue, and Enqueue Replication instances are selected. Service IP Label Select a service IP label to make the SAP Global Filesystems available using NFS. Any service IP label already defined to PowerHA SystemMirror can be chosen, or a new service IP label will be created if the label does not already exist in the PowerHA SystemMirror cluster configuration. Shared Volume Groups The default list of volume groups auto-discovered by the SAP Smart Assist that host the /export/sapmnt/ and /usr/sap/trans filesystems. Filesystems/Directories to Export (NFSv2/3) The filesystems which need to be exported. Please ensure the list includes /export/sapmnt/ and /export/usr/sap/trans. Modify an existing PowerHA SystemMirror defined SAP Global File System details.
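Before running the assistant, the prerequisites above (export filesystems mounted on the primary node, empty mount points on all nodes) can be checked by hand. The following ksh sketch is only an illustration: the SID, the node list, and the use of ssh between nodes are assumptions, not part of the shipped tooling.

    #!/usr/bin/ksh
    # Sketch only: check SAP Global Filesystem prerequisites before running
    # Smart Assist for SAP. SID, NODES and the use of ssh are placeholders.
    SID=TST
    NODES="node1 node2"

    # The primary node must already have the export filesystems mounted.
    for fs in /export/sapmnt/$SID /export/usr/sap/trans
    do
        mount | grep -w "$fs" >/dev/null || echo "WARNING: $fs is not mounted"
    done

    # Every node needs empty /sapmnt/<SID> and /usr/sap/trans mount points.
    for node in $NODES
    do
        for dir in /sapmnt/$SID /usr/sap/trans
        do
            ssh $node "[ -d $dir ] && [ -z \"\$(ls -A $dir)\" ]" \
                || echo "WARNING: $dir is missing or not empty on $node"
        done
    done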
Please select an application instance which needs to be configured under PowerHA SystemMirror from the auto-generated list. Please ensure that the application instance uses a virtual IP and a shared disk to host its instance directory (usually found at /usr/sap//), and that the shared disk can be activated (varied on) on all nodes selected as primary and takeover nodes in the next SMIT screen. Add SAP Application Instance Details to the PowerHA SystemMirror cluster configuration. Please ensure that the discovered shared volume group can be varied on on all the nodes selected as primary and takeover nodes. If the discovered virtual IP is not already configured under PowerHA SystemMirror, Smart Assist for SAP will automatically configure it as a service IP. SAP Application instance name which is selected prior to entering this SMIT screen. Application Name A PowerHA SystemMirror cluster-wide unique name. The application name is a container for the PowerHA SystemMirror resource groups, application servers, and other cluster components generated to support the SAP instance. The application name must conform to the same rules for naming PowerHA SystemMirror resource groups. The name can be no longer than 32 characters and must only contain alphanumeric and underscore (_) characters. Primary Node The node where the selected Application instance is presently found to be running. Takeover Node(s) The takeover node(s) must be existing PowerHA SystemMirror cluster nodes. Takeover nodes will participate in the resource group generated to support the selected Application instance. Please ensure that these nodes can vary on the shared disks hosting the instance directory filesystems and can bring up the virtual IP. Service IP The auto-discovered IP in use by the selected application instance. Shared Volume Groups The auto-discovered volume group which hosts the instance directory for the application instance. Modify an existing PowerHA SystemMirror defined SAP Application instance details.
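As a rough companion to the checks the assistant performs, a ksh sketch like the following could be used to confirm that the shared volume group and the service IP label described above are usable on a node; the volume group name and IP label are placeholders.

    #!/usr/bin/ksh
    # Sketch only: sanity checks before adding an SAP instance with Smart
    # Assist. VG and SERVICE_IP below are placeholders.
    VG=sapvg
    SERVICE_IP=sapci_svc

    # The shared volume group must be known on this node and should be
    # activatable (varied on) on every primary and takeover node.
    lsvg $VG >/dev/null 2>&1 || echo "WARNING: volume group $VG is not known here"
    lsvg -o | grep -w $VG >/dev/null || echo "INFO: $VG is currently varied off"

    # The virtual (service) IP label must be resolvable via DNS or /etc/hosts;
    # confirm with 'netstat -in' or 'ifconfig -a' that it is configured as an
    # IP alias, not as the base address of an interface.
    host $SERVICE_IP >/dev/null 2>&1 || echo "WARNING: $SERVICE_IP is not resolvable"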
Please select the SCS/ASCS instance(s) which need to be configured under PowerHA SystemMirror from the auto-generated list. Please ensure that the Java SCS and ABAP SCS instances (if both exist) run on the same machine, as PowerHA SystemMirror will manage both SAP instances in a single resource group. Also ensure that the SCS/ASCS instances use a virtual IP and a shared disk to host their instance directories (usually found at /usr/sap//), and that the shared disk can be activated (varied on) on all nodes selected as primary and takeover nodes in the next SMIT screen. Add SAP SCS/ASCS Instance Details to the PowerHA SystemMirror cluster configuration. Please ensure that the discovered shared volume group can be varied on on all the nodes selected as primary and takeover nodes. If the discovered virtual IP is not already configured under PowerHA SystemMirror, Smart Assist for SAP will automatically configure it as a service IP. SAP SCS/ASCS instance name(s) which is/are selected prior to entering this SMIT screen. Application Name A PowerHA SystemMirror cluster-wide unique name. The application name is a container for the PowerHA SystemMirror resource groups, application servers, and other cluster components generated to support the SAP instance. The application name must conform to the same rules for naming PowerHA SystemMirror resource groups. The name can be no longer than 32 characters and must only contain alphanumeric and underscore (_) characters. Primary Node The node where the selected SCS/ASCS instance(s) is/are presently found to be running. Takeover Node(s) The takeover node(s) must be existing PowerHA SystemMirror cluster nodes. Takeover nodes will participate in the resource group generated to support the selected Application instance. Please ensure that these nodes can vary on the shared disks hosting the instance directory filesystems and can bring up the virtual IP. Service IP The auto-discovered IP in use by the selected SCS/ASCS instance. Shared Volume Groups The auto-discovered volume group which hosts the instance directory for the SCS/ASCS instance(s). Modify an existing PowerHA SystemMirror defined SAP SCS/ASCS instance(s) details. Please select an ERS instance which needs to be configured under PowerHA SystemMirror from the auto-generated list. PowerHA SystemMirror needs SCS/ASCS instances to be configured before configuring ERS instances. Please ensure that both the ERS instance backing the Java SCS and the ERS instance backing the ABAP SCS instance (if they exist) are configured to run on all participating nodes. The participating nodes list must match both the primary and takeover nodes of the SCS/ASCS configuration. Add SAP ERS Instance(s) Details to the PowerHA SystemMirror cluster configuration. PowerHA SystemMirror needs SCS/ASCS instances to be configured before configuring ERS instances. Please ensure that both the ERS instance backing the Java SCS and the ERS instance backing the ABAP SCS instance (if they exist) are configured to run on all participating nodes. The participating nodes list must match both the primary and takeover nodes of the SCS/ASCS configuration. SAP ERS instance name(s) which is/are selected prior to entering this SMIT screen. Application Name A PowerHA SystemMirror cluster-wide unique name. The application name is a container for the PowerHA SystemMirror resource groups, application servers, and other cluster components generated to support the SAP instance. The application name must conform to the same rules for naming PowerHA SystemMirror resource groups. The name can be no longer than 32 characters and must only contain alphanumeric and underscore (_) characters. Participating Nodes The list of nodes where the ERS instance(s) are configured to run. PowerHA SystemMirror needs SCS/ASCS instances to be configured before configuring ERS instances. This list must match both the primary and takeover nodes of the SCS/ASCS configuration.
Modify an existing PowerHA SystemMirror defined SAP ERS instance(s) details. Please select the SAP Database instance. Change/Show already configured SAP Database instance details.
Discovered SAP System ID (SID)
Discovered SAP Instance No.
Volume Group(s) The auto-discovered volume group which hosts the instance directory for the (A)SCS instance(s). Select LOCAL for local volume groups. Volume Group(s) The auto-discovered volume group which hosts the instance directory for the ERS instance(s). Select LOCAL for local volume groups. Volume Group(s) The auto-discovered volume group which hosts the instance directory for the AS instance(s). Select LOCAL for local volume groups. Service IP The auto-discovered IP in use by the selected ERS instance. Primary Node The node where the selected ERS instance(s) is/are presently found to be running. Choose the SystemMirror network under which the service IP label should be created. Select the LOCAL option for node-bound IP and hostname configurations. Please make sure the network is configured on all participating nodes. Enter the name of the resource group which contains the database needed for operation. The Smart Assist will add a Start After dependency with the database resource group. If the dependent RG is offline, the Smart Assist will start the database RG prior to this SAP instance resource group. Change/Show existing SAP instance-specific and SAP SID-specific details. SAP instance attributes are specific to each SAP instance. SAP Globals attributes are specific to each SID; changes will be reflected in all instances of this SID. Directory for the SAP executables. Local instance directories, together with a proper setup of sapcpe, provide the highest protection from NFS outages. Specify the wait for started delay value in seconds, between 0 and 99. Specify the wait for started timeout value in seconds, between 0 and 99. Specify the wait for stopped delay value in seconds, between 0 and 99. Specify the wait for stopped timeout value in seconds, between 0 and 99. SAP instance profile. Local instance profiles, together with a proper setup of sapcpe, provide the highest protection from NFS outages. This is typically automated by SAP for ERS instances only. If this instance is installed on a local disk, this value must be set to LOCAL. For a shared disk, this must be set to the volume group name(s) associated with this instance. For shared volume groups this is ONLY the volume group containing the instance directory. If the SAP Global directory (typically /sapmnt/SID) is provided by means of an NFS filesystem, this value must be set to 1. Otherwise an NFS outage can bring the RG into config_too_long. The behavior can be customized with the EC_START_NFS_FAILED setting. Specify the export directory of the filesystem for the NFSv3/v4 SAP Global Filesystem. Please leave it empty if it is a non-NFS directory. Specify the NFSv3/v4 IP for the SAP system if it is exported. Please leave it empty if it is a non-NFS directory. If set to 1, the logic also verifies the health of the replication (only for CS instances). Otherwise empty. Define the maximum startup time (value * 5 seconds) a CS or ERS instance is given to entirely build up its replication table, to avoid false failovers. Empty for application server instances. Redirect the logfile for the start/stop/monitor scripts backend. Ensure this location is writable and has at least 4 GB of space. Set up adequate monitoring of the filesystem space.
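The four timeout and delay fields described above correspond to the arguments of the sapcontrol WaitforStarted and WaitforStopped functions. The following is only a minimal sketch; the instance number and values are examples.

    #!/usr/bin/ksh
    # Sketch only: how the WaitforStarted/WaitforStopped timeout and delay
    # values are typically consumed. Instance number and values are examples.
    NR=00             # SAP instance number
    START_TIMEOUT=60  # seconds (0-99 in the SMIT field)
    START_DELAY=5     # seconds between polls

    sapcontrol -nr $NR -function Start
    # Poll every $START_DELAY seconds, for at most $START_TIMEOUT seconds.
    sapcontrol -nr $NR -function WaitforStarted $START_TIMEOUT $START_DELAY

    STOP_TIMEOUT=60
    STOP_DELAY=5
    sapcontrol -nr $NR -function Stop
    sapcontrol -nr $NR -function WaitforStopped $STOP_TIMEOUT $STOP_DELAY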
The notification level will not have any effect on SystemMirror notification methods. 0 = disable all script-based notifications. 1 = the monitor will send a notification for YELLOW core processes of the instance. 2 = the monitor will send a notification for all YELLOW and worse processes where operation is continuing. 4 = the start script will send a notification on special events. Combinations are allowed by adding values ==> 9 levels. Specify the SAP Administrator user id. Specify the SAP environment value. Select the logging level for the start/stop/monitor SAP scripts for debug logging. Exit code of the start script of the instance. If set to 1, the RG will go into ERROR state, waiting for manual recovery. Otherwise the application monitor will trigger a failover to try to recover. Exit code of the start script in case the instance's sapstartsrv cannot be started. Without a sapstartsrv, a start script will never manage to bring up the instance. If set to 1, the RG will go into ERROR state, waiting for manual recovery. Otherwise the application monitor will trigger a failover to try to recover. If the SAP system's Global directory is not built on NFS, this field must be empty. Otherwise this is the exit code of the start script in case the SAP Global filesystem is down. If set to 1, the RG will go into ERROR state, waiting for manual recovery. Otherwise the application monitor will trigger a failover to try to recover. If set to 0, this variable disables the monitor of this application for as long as sapstartsrv cannot be started, as the instance itself can still be fully operational. A notification of this is sent when the NOTIFY_* parameters are configured and enabled. Please set the value to 1 if the gateway of this specific instance is vital for your business. This will trigger a recovery of the entire instance (restart/failover). If the gateway is not critical, you can choose to enable a notification method instead. 1 = enable the HA framework between SAP and PowerHA for the system's CS instances. This is mandatory for SAP's BC optimizations. 1 = enable the HA framework between SAP and PowerHA for the system's ERS instances. This is mandatory for SAP's BC optimizations. 1 = enable the HA framework between SAP and PowerHA for the system's application server instances. This is mandatory for SAP's BC optimizations. Logfile location for the OS connector of the SAP HA API. If empty, it will be placed into the instance work directory or into /usr/sap/hostcontrol/work/sap_powerha_script_connector.log. Log level of OSCON logging. An SAP system can be running healthy, be broken, or be running in a degraded mode. The provided script will be passed input parameters depending on the identified issue that degrades the functionality but does not necessarily require a failover. The level of logging depends on the level set for NOTIFICATION_LEVEL. It is the administrator's responsibility to ensure this script works properly. This level will not have any effect on SystemMirror notification methods. 0 = disable all script-based notifications. 1 = the monitor will send a notification for YELLOW core processes of the instance. 2 = the monitor will send a notification for all YELLOW and worse processes where operation is continuing. 4 = the start script will send a notification on special events. Combinations are allowed by adding values ==> 9 levels.
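Because the notification levels are additive, a combined value can be decoded bit by bit. A small ksh illustration based only on the meanings listed above; the variable name and example value are placeholders.

    #!/usr/bin/ksh
    # Sketch only: decode an additive notification level using the meanings
    # given in the help text (1, 2 and 4 can be combined).
    LEVEL=5   # example: 1 + 4

    [ $LEVEL -eq 0 ]           && echo "all script-based notifications disabled"
    [ $(( LEVEL & 1 )) -ne 0 ] && echo "notify on YELLOW core processes"
    [ $(( LEVEL & 2 )) -ne 0 ] && echo "notify on all YELLOW and worse processes"
    [ $(( LEVEL & 4 )) -ne 0 ] && echo "start script notifies on special events"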
Specify the SAP Administrator user id. SAP Application instance name which is selected prior to entering this SMIT screen. SAP AS instance name which is selected prior to entering this SMIT screen. The primary node must be a node already defined to the PowerHA SystemMirror cluster and must be hosting the /export/sapmnt/SID and /export/usr/sap/trans filesystems. Shared Volume Groups The default list of volume groups auto-discovered by the SAP Smart Assist that host the /export/sapmnt/SID and /usr/sap/trans filesystems. Filesystems/Directories to Export (NFSv2/3) The filesystems which need to be exported. Please ensure the list includes at least /export/sapmnt/SID and /export/usr/sap/trans. Select 1 to enable the HA framework between SAP and PowerHA SystemMirror for CS instances. This is mandatory for SAP's BC optimizations. Please refer to your SAP guides. Select 1 to enable the HA framework between SAP and PowerHA SystemMirror for ERS instances. This is mandatory for SAP's BC optimizations. Please refer to your SAP guides. Select 1 to enable the HA framework between SAP and PowerHA SystemMirror for AS instances. This is mandatory for SAP's BC optimizations. Please refer to your SAP guides. Participating Nodes These are the participating nodes for the resource group created for the ERS instance. However, the order of nodes will be decided based on the primary node selected for its corresponding Central Instance.
SAP Smart Assist
1. SAP NW 7.0 Global Filesystem
2. SAP NW 7.0 SCS Instance
3. SAP NW 7.0 ERS Instance
4. SAP NW 7.0 AS Instance
5. SAP Database Instance
SAP Smart Assist
SAP NetWeaver Global Filesystem
SAP NetWeaver (A)SCS Instance
SAP NetWeaver ERS Instance
SAP NetWeaver AS Instance
SAP Database Instance
ERROR: The PowerHA SystemMirror resource group: %s already exists. Please choose another name for the application. Unexpected error encountered while attempting to create resource group: %s ERROR: The PowerHA SystemMirror application server: %s already exists. Please choose another name for the application. Unexpected error encountered while attempting to create PowerHA SystemMirror application server: %s Unexpected error encountered while attempting to create PowerHA SystemMirror application monitor: %s ERROR: The PowerHA SystemMirror application monitor: %s already exists. Please choose another name for the application. ERROR: Volume group: %s is already defined to PowerHA SystemMirror. Please only select volume groups that have not been added to a PowerHA SystemMirror resource group. ERROR: Service IP label: %s is already defined to PowerHA SystemMirror resource group: %s Please choose a service IP label that does not already participate in a PowerHA SystemMirror resource group. ERROR: Node %s was used more than once in the takeover or primary node lists. Please only use a node once in either the primary or takeover node lists. Invalid application name: %s. Valid PowerHA SystemMirror names must be at least one character long and can contain characters ([A-Z, a-z]), numbers ([0-9]) and '_' (underscore). A name cannot begin with a number, and a PowerHA SystemMirror reserved word cannot be a valid name. Unexpected error encountered while attempting to create resource group: %s Unexpected error encountered while setting up the SAP Global Environment. Unexpected error encountered while attempting to associate the PowerHA SystemMirror resource group: %s with the SAP smart assist application %s. ERROR: The Service IP label: %s is not resolvable on the local system. Please check to ensure the IP label is resolvable via either DNS or /etc/hosts. ERROR: Unable to create the service IP label: %s There are no available PowerHA SystemMirror networks defined. Please either define one or more PowerHA SystemMirror networks,
or, alternatively, define the service IP label within the PowerHA SystemMirror configuration prior to re-running the SAP smart assistant. Unexpected error encountered while attempting to create PowerHA SystemMirror service IP label: %s Unexpected error encountered while attempting to modify resource group: %s Unexpected error encountered while attempting to remove application monitor: %s. Unexpected error encountered while attempting to remove application server: %s. Unexpected error encountered while attempting to remove resource group: %s. Unexpected error encountered while attempting to remove the application %s SAP smart assist component references from the PowerHA SystemMirror cluster configuration. Unable to find SAP Global Filesystem resources configured on the cluster. Please configure the SAP Global Filesystem using Smart Assist for SAP. Unable to find startup profile %s for instance %s. The discovered instance number for instance %s does not match. Possibly the wrong instance startup profile %s. Unable to create a parent/child relationship between %s and %s. The SAP Global File system is not configured on node %s where %s has been selected to run. SAP Instance name: %s is configured to run on node %s with an IP %s which is found to be a non-aliased address. Unable to find SAP SCS instance resources configured on the cluster. Please configure SAP SCS instances using Smart Assist for SAP. SAP SCS instance(s) is/are not configured on node %s where %s has been selected to run. The participating node list must match the node list of the SCS instances. ERROR: %s - Missing argument -a application_name ERROR: %s - Missing Instance Names. ERROR: %s - Missing Virtual IPs. ERROR: %s - Missing Instance Numbers. ERROR: %s - Missing SAPSYSTEMNAME. Unable to find instance profile %s for instance %s. HA polling script details for instance %s are not found in its instance profile: %s. Please ensure the parameter enque/enrep/hafunc_check is set to /usr/es/sbin/cluster/sa/sap/sbin/cl_scsstandbycheck. Wrong HA polling script details for instance %s in its instance profile: %s. Please ensure the parameter enque/enrep/hafunc_check is set to /usr/es/sbin/cluster/sa/sap/sbin/cl_scsstandbycheck. The enqueue replication server is not running with the required patch level. Please ensure that enrepserver -v shows patch level 152 or greater. WARNING: Did not discover a Smart Assist enabled NFSv4 SAP Global File System. Continuing on the assumption that the administrator has used alternate methods (GPFS, NFS in another cluster, etc.) to set up a highly available SAP Global File System. Please follow the steps below after successfully adding the ERS instance, to avoid a false ERS restart. If the ERS needs to be configured with PowerHA, edit the file %s with the following changes: 1. Comment the line containing Autostart = 1. 2. Edit the line Restart_Program_00 by replacing Restart_Program_00 with Start_Program_00. The IP address "%1$s" specified for SAP Instance "%2$s" must be configured as an alias IP address on any one of the nodes participating in this instance. ERROR: The PowerHA SystemMirror application controller: %s already exists. Please choose another name for the application. Unexpected error encountered while attempting to create PowerHA SystemMirror application controller: %s Unexpected error encountered while attempting to remove application controller: %s. ERROR: Node "%1$s" which is given in the participating nodes is not part of the cluster. Please only use nodes which are part of the cluster. ERROR: Unable to create file system for stable storage path.
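The ERS profile change described above (comment the Autostart line and turn Restart_Program_00 into Start_Program_00) could be applied along these lines. The profile path is a placeholder, and since AIX sed has no in-place option the edit goes through a temporary file; this is a sketch, not the shipped procedure.

    #!/usr/bin/ksh
    # Sketch only: apply the ERS profile changes described above so that the
    # cluster, not sapstart, controls restarts. PROFILE is a placeholder path.
    PROFILE=/usr/sap/TST/SYS/profile/TST_ERS20_examplehost

    cp $PROFILE $PROFILE.$(date +%Y%m%d)     # keep a backup first
    # 1. Comment the line containing "Autostart = 1".
    # 2. Replace Restart_Program_00 with Start_Program_00.
    sed -e 's/^\(Autostart[ ]*=[ ]*1\)/#\1/' \
        -e 's/^Restart_Program_00/Start_Program_00/' \
        $PROFILE > $PROFILE.new && mv $PROFILE.new $PROFILE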
WARNING: Failed to restore the GLVM replicated resource for Smart Assist Resource Group %1$s. Run the following command manually to make sure GLVM statistics are collected and shown in the PowerHA GUI: /usr/es/sbin/cluster/glvm/utils/cl_glvm_configuration -c %1$s Adding %s Resource Group to support the SAP Global Filesystem. Creating service IP label: %s Modifying SAP Global Filesystem details. Removing existing PowerHA SystemMirror cluster components. Adding discovered SAP SCS instances: %s to the PowerHA SystemMirror configuration. Adding %s Resource Group to support SAP SCS instances: %s Modifying SAP SCS Instance details. Removing existing cluster components. Creating PowerHA SystemMirror application server: %s Creating PowerHA SystemMirror application monitor: %s Adding discovered SAP AS instance: %s to the PowerHA SystemMirror configuration. Setting SAPSYSTEMNAME to %s. Adding discovered SAP ERS instances: %s to the PowerHA SystemMirror configuration. Adding %s Resource Group to support SAP ERS instances: %s ERROR: The specified Notification Script does not have executable permissions. ERROR: The specified SAP Profile is empty. ERROR: The specified executable directory does not have SAP executables in it. ERROR: The specified SAP executables path is not a directory. ERROR: Unable to write to the specified SAP Logger log file. ERROR: The specified DATABASE RG is not valid. Creating PowerHA SystemMirror application controller: %s ERROR: %s : Failed to start SAP SCS instance %s with virtual IP %s Successfully STARTED SAP SCS instance %s with virtual IP %s ERROR: %s : Failed to stop SAP SCS instance %s with virtual IP %s Successfully STOPPED SAP SCS instance %s with virtual IP %s WARNING: %s : Failed to stop the sapstartsrv process for SCS instance %s Successfully STOPPED the sapstartsrv process for SCS instance %s ERROR: %s : No output from sapcontrol for process status ERROR: %s : No output from sapcontrol for process names SAP SCS instance %s process %s is running and its status is: %s ERROR: SAP SCS instance %s process %s status found to be: %s sapstartsrv process for instance %s is running ERROR: sapstartsrv process for SCS instance: %s is not running sapstartsrv for SCS instance: %s is running ERROR: %s : Failed to start SAP AS instance %s with virtual IP %s Successfully STARTED SAP AS instance %s with virtual IP %s ERROR: %s : Failed to stop SAP AS instance %s with virtual IP %s Successfully STOPPED SAP AS instance %s with virtual IP %s WARNING: %s : Failed to stop the sapstartsrv process for AS instance %s Successfully STOPPED the sapstartsrv process for AS instance %s ERROR: %s : No output from sapcontrol for process status ERROR: %s : No output from sapcontrol for process names SAP AS instance %s process %s is running and its status is: %s ERROR: SAP AS instance %s process %s status found to be: %s sapstartsrv process for instance %s is running ERROR: sapstartsrv process for AS instance: %s is not running sapstartsrv for AS instance: %s is running ERROR: %s : Failed to start SAP ERS instance %s with virtual IP %s Successfully STARTED SAP ERS instance %s with virtual IP %s ERROR: %s : Failed to stop SAP ERS instance %s with virtual IP %s Successfully STOPPED SAP ERS instance %s with virtual IP %s WARNING: %s : Failed to stop the sapstartsrv process for ERS instance %s Successfully STOPPED the sapstartsrv process for ERS instance %s ERROR: %s : No output from sapcontrol for process status ERROR: %s : No output from sapcontrol for process names SAP ERS instance %s process %s is running and its status is: %s
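The per-process status and "No output from sapcontrol" messages above relate to querying the instance process list. A sketch of the kind of sapcontrol call involved; the instance number and the awk filter are illustrative only, not the shipped monitor code.

    #!/usr/bin/ksh
    # Sketch only: query the per-process status that the messages above refer
    # to. NR is a placeholder instance number.
    NR=01

    # Lists each instance process with a dispstatus of GREEN/YELLOW/RED/GRAY.
    sapcontrol -nr $NR -function GetProcessList

    # Illustrative count of processes that are not GREEN.
    NOT_GREEN=$(sapcontrol -nr $NR -function GetProcessList | \
                awk -F',' '/GREEN|YELLOW|RED|GRAY/ && $3 !~ /GREEN/' | wc -l)
    echo "processes not in state GREEN: $NOT_GREEN"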
ERROR: SAP ERS instance %s process %s status found to be: %s sapstartsrv process for instance %s is running ERROR: sapstartsrv process for ERS instance: %s is not running sapstartsrv for ERS instance: %s is running
SAP NW7.0 GFS
SAP NW7.0 ERSINSTANCE
Problem with the XML configuration file. Ensure a valid XML file is supplied. Problem in parsing the %1$s tag in function %2$s. Primary Node %1$s is not valid in the cluster. One of the takeover nodes from %1$s is not valid in the cluster. Supplied directory %1$s for BASE_DIRECTORY_PATH does not exist. Supplied directory %1$s for TRANS_DIRECTORY_PATH does not exist. Supplied directory %1$s for INSTANCE_DIRECTORY for the ASCS Instance does not exist. Supplied directory %1$s for INSTANCE_DIRECTORY for the SCS Instance does not exist.
ASCS instance name can't be null in the supplied XML File
ASCS instance number can't be null in the supplied XML File
ASCS instance service IP can't be null in the supplied XML File
ASCS instance profile directory can't be null in the supplied XML File
SCS instance name can't be null in the supplied XML File
SCS instance number can't be null in the supplied XML File
SCS instance service IP can't be null in the supplied XML File
SCS instance profile directory can't be null in the supplied XML File
Primary node for ASCS instance and SCS instance are not the same
Takeover node for ASCS instance and SCS instance are not the same
ASCS ERS instance name can't be null in the supplied XML File
ASCS ERS instance number can't be null in the supplied XML File
ASCS ERS instance service IP can't be null in the supplied XML File
ASCS ERS instance profile directory can't be null in the supplied XML File
SCS ERS instance name can't be null in the supplied XML File
SCS ERS instance number can't be null in the supplied XML File
SCS ERS instance service IP can't be null in the supplied XML File
SCS ERS instance profile directory can't be null in the supplied XML File
Primary node for ASCS ERS instance and SCS instance are not the same
Takeover node for ASCS ERS instance and SCS instance are not the same
Supplied directory %1$s for INSTANCE_DIRECTORY for the ASCS ERS Instance does not exist. Supplied directory %1$s for INSTANCE_DIRECTORY for the SCS ERS Instance does not exist.
SCS instance name can't be null in the supplied XML File
SCS instance number can't be null in the supplied XML File
SCS instance service IP can't be null in the supplied XML File
SCS instance profile directory can't be null in the supplied XML File
ASCS Service Network can't be null in the supplied XML File
ASCS Volume Group can't be null in the supplied XML File
SCS Service Network can't be null in the supplied XML File
SCS Volume Group can't be null in the supplied XML File
ASCS ERS Service Network can't be null in the supplied XML File
ASCS ERS Volume Group can't be null in the supplied XML File
SCS ERS Service Network can't be null in the supplied XML File
SCS ERS Volume Group can't be null in the supplied XML File
AS Service Network can't be null in the supplied XML File
AS Volume Group can't be null in the supplied XML File
Is this an NFS mountpoint?
SAPMNT Export Directory
NFS IP
Notification Script
SAPADMUSER
SAPENV
LOGGER LOGFILE
OSCON OnOff APP
OSCON OnOff CS
OSCON OnOff ERS
ERROR: SAP Instance "%1$s" is already configured to be highly available through PowerHA SM. Exiting. INFO: Since the VG "%1$s" seems to be already defined as a resource to PowerHA SM, the instance "%2$s" will be grouped with the Resource Group "%3$s". INFO: Please note that the instance "%1$s" will be grouped with RG "%2$s" of Application "%3$s".
INFO: Since the Service IP label seems to be already defined as a resource to PowerHA SM, the instance "%1$s" will be grouped with the Resource Group "%2$s". INFO: Modifying resource group "%1$s" for Instance "%2$s". INFO: Local configuration tuning for "%1$s". INFO: Adding a dependency with "%1$s". INFO: SAP Instance "%1$s" executable directory is "%2$s". INFO: SAP Instance "%1$s" profile is "%2$s". INFO: Discovering NFS information. INFO: Updating SAP GLOBALS. INFO: Successfully configured instance "%1$s" with application "%2$s". ERROR: Unexpected error encountered while attempting to associate this application %s with PowerHA SystemMirror. Unable to change the runtime policy for SAP_SCS_RG. Adding %s Resource Group to support SAP AS instance: %s Supplied directory %1$s for INSTANCE_DIRECTORY for the AS Instance does not exist. INFO: Successfully configured Application %s. WARN: Since a local IP configuration is detected, a single node RG will be created and the node preference will be ignored for %s. ERROR: The value entered for %s is not valid. Unable to find the corresponding SAP Central Service instance resources configured on the cluster. Please configure SAP Central Service instances using Smart Assist for SAP. SAP Central Service instance(s) is/are not configured on node "%1$s" where "%2$s" has been selected to run. The participating node list must match the node list of the Central Service instances. The resource group %s must be offline in order to change a resource group's behavioral policies. INFO: Please follow the steps below to avoid an NFS outage. The directory %s is not present in LIBPATH. Please execute the command below. 1. su - %s 2. echo "setenv LIBPATH ${LIBPATH}:"%s" " >> .cshrc /usr/sap/sapservices is not present in %s /usr/sap/sapservices is not the same on nodes %s and %s 2. echo "env LIBPATH=${LIBPATH}:"%s" " >> .cshrc "%1$s" Instance "%2$s" of "%3$s" - has its SAP Global filesystem on an NFS-based share. Availability is evaluated now. "%1$s" Instance "%2$s" of "%3$s" - The SAP Global filesystem is unavailable. Start procedure stopped. Please resolve any issues in the NFS server/client before continuing. Exit code of the start script is "%4$s". "%1$s" Instance "%2$s" of "%3$s" - The SAP Global filesystem is available. Continuing to start the instance. "%1$s" Instance "%2$s" of "%3$s" - The sapstartsrv process failed to start. Start procedure stopped. Please evaluate the SAP logfile /usr/sap/"%4$s"/"%5$s"/work/sapstartsrv.log. Exit code of the start script is "%6$s". "%1$s" Instance "%2$s" of "%3$s" - The sapstartsrv process started successfully. Continuing to start the instance. "%1$s" Instance "%2$s" of "%3$s" - Instance is already running. No restart is performed. "%1$s" Instance "%2$s" of "%3$s" - Instance is already running, but the ENSA/ERS replication status is not GREEN. The PowerHA application monitor notification method is called if specified. "%1$s" Instance "%2$s" of "%3$s" - Starting cleanup of remainders of a previous startup. "%1$s" Instance "%2$s" of "%3$s" - The attempt to stop the instance using sapcontrol -function Stop failed. Cleanup will be done by killing processes and cleaning up shared memory segments. "%1$s" Instance "%2$s" of "%3$s" - Cleanup finalized. The instance is cleaned up and sapstartsrv is running. "%1$s" Instance "%2$s" of "%3$s" - Start instance returned with a return code of "%4$s". "%1$s" Instance "%2$s" of "%3$s" - Start completed successfully. "%1$s" Instance "%2$s" of "%3$s" - Start completed successfully, but the timeout specified for instance startup was not sufficient.
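The start messages above roughly follow this sapcontrol sequence. The sketch below is a simplification with a placeholder SID, instance number and timeouts; it is not the shipped Smart Assist start script, which also handles cleanup, notifications and the configurable exit codes.

    #!/usr/bin/ksh
    # Sketch only: simplified outline of the start sequence described by the
    # messages above. SID, NR and the timeouts are placeholders.
    SID=TST; NR=10; TIMEOUT=120; DELAY=5

    # 1. The SAP Global filesystem (/sapmnt/<SID>) must be reachable.
    df /sapmnt/$SID >/dev/null 2>&1 || exit 1

    # 2. Make sure sapstartsrv is up, then start the instance.
    sapcontrol -nr $NR -function StartService $SID
    sapcontrol -nr $NR -function Start

    # 3. Wait for the instance to come up within the configured timeout.
    sapcontrol -nr $NR -function WaitforStarted $TIMEOUT $DELAY || exit 1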
"%1$s" Instance "%2$s" of "%3$s" - The Instance startup finished without error. The call sapcontrol -function WaitforStarted exit with error. "%1$s" Instance "%2$s" of "%3$s" - The Instance startup failed. Exit start script with exit code "%4$s". "%1$s" Instance "%2$s" of "%3$s" - For Central Services and ERS instances the replication health status is verified now. "%1$s" Instance "%2$s" of "%3$s" - Lock table fully created. "%1$s" Instance "%2$s" of "%3$s" - The ERS has not finalized to build its content or is not started. "%1$s" Instance "%2$s" of "%3$s" - Specified timeout was not sufficient to fully build up replication table. Not correcting this will result in a loss of information when failing over. "%1$s" Instance "%2$s" of "%3$s" - Replication is not working. Please correct this instantly if this installation is enabled for ENSA/ERS. "%1$s" Instance "%2$s" of "%3$s" - The instance is running. Replication is not working well. Returncode: "%4$s". ("%5$s"=replicating but hanging behind, else replication Stopped or broken.) "%1$s" Instance "%2$s" of "%3$s" - Ensure sapstartsrv process is operational or can be started. "%1$s" Instance "%2$s" of "%3$s" - The sapstartsrv process failed to start. Monitor script exit with "%4$s". "%1$s" Instance "%2$s" of "%3$s" - Start health check. "%1$s" Instance "%2$s" of "%3$s" - The instance is running. "%1$s" Instance "%2$s" of "%3$s" - There is no enqueue replication server instance in place. This is due to not setting up an ERS instance or if the cluster is running on single node. "%1$s" Instance "%2$s" of "%3$s" - The instance is running and replicating. "%1$s" Instance "%2$s" of "%3$s" - The instance might need to be recovered. Start evaluation of severity. "%1$s" Instance "%2$s" of "%3$s" - The core processes are inoperative. The accumulated returncode: "%4$s". (2|3=error en, 20|30=error ms, 22<->33=both error, >33 also the gw is broken) "%1$s" Instance "%2$s" of "%3$s" - The Gateway process is inoperative. The user defined variable was set to trigger a failover. The accumulated returncode: "%4$s". "%1$s" Instance "%2$s" of "%3$s" - The following processes are indicating a state of YELLOW. If this persists manual corrections are required. The accumulated returncode: "%4$s". (1=en, 10=ms, 100=gw or accumulation) "%1$s" Instance "%2$s" of "%3$s" - The core process is inoperative. The accumulated returncode: "%4$s". "%1$s" Instance "%2$s" of "%3$s" - The processes indicates a state of YELLOW. If this persists manual corrections are required. The accumulated returncode: "%4$s". "%1$s" Instance "%2$s" of "%3$s" - Continue to operate. "%1$s" Instance "%2$s" of "%3$s" - Evaluate health of a JAVA Application server. The accumulated returncode is: "%4$s". "%1$s" Instance "%2$s" of "%3$s" - The jcontrol/jstart is running. This process will be able to recover all remaining issues. "%1$s" Instance "%2$s" of "%3$s" - Functionalty degraded. The jc and ser process are effected. "%1$s" Instance "%2$s" of "%3$s" - Functionalty seriously degraded, maybe broken. The ser0 process is effected. "%1$s" Instance "%2$s" of "%3$s" - The jcontrol/jstart is broken and the server Process with ID 0 is stopped or RED. Monitor will exit with 1. "%1$s" Instance "%2$s" of "%3$s" - Functionalty is degraded. The ig and/or icm are running in state YELLOW. "%1$s" Instance "%2$s" of "%3$s" - The ig and/or icm are not operational. "%1$s" Instance "%2$s" of "%3$s" - Functionalty is degraded. The sdm is not running in state GREEN. 
"%1$s" Instance "%2$s" of "%3$s" - Evaluate health of a DUAL stack Application server. The accumulated returncode is: "%4$s". "%1$s" Instance "%2$s" of "%3$s" - The jcontrol/jstart and disp+work are running. This processees will be able to recover all remaining issues. "%1$s" Instance "%2$s" of "%3$s" - Functionalty is degraded. The jc and/or ser0 is effected. "%1$s" Instance "%2$s" of "%3$s" - The jcontrol/jstart and/or disp+work are broken. Monitor will exit with 1. "%1$s" Instance "%2$s" of "%3$s" - Functionalty is degraded. The ig and icm are running in state YELLOW. "%1$s" Instance "%2$s" of "%3$s" - Functionalty is degraded. The ig and icm are not operational. "%1$s" Instance "%2$s" of "%3$s" - Functionalty of rslgcoll, and/or rslsend, and/or debugproxy have a non green state. "%1$s" Instance "%2$s" of "%3$s" - Evaluate health of an ABAP Application server. The accumulated returncode is: "%4$s". "%1$s" Instance "%2$s" of "%3$s" - The disp+work is running. This process will be able to recover all remaining issues. "%1$s" Instance "%2$s" of "%3$s" - The disp+work is not operational. Monitor will exit with 1. "%1$s" Instance "%2$s" of "%3$s" - The disp+work is not fully operational. Please investigate. "%1$s" Instance "%2$s" of "%3$s" - Functionalty is degraded. The ig and/or icm are running in state YELLOW. "%1$s" Instance "%2$s" of "%3$s" - The ig and icm are not operational. "%1$s" Instance "%2$s" of "%3$s" - Functionality of gw not available. "%1$s" Instance "%2$s" of "%3$s" - Stack type could not be determined. No health check will be performed. "%1$s" Instance "%2$s" of "%3$s" - Found the service IP alias of "%4$s". No er process restart from now. "%1$s" Instance "%2$s" of "%3$s" - Start to move ERS away after the Instance "%4$s" took over the replication table. "%1$s": SAPSYSTEMNAME not found. Quit immediately with Exit code 1. "%1$s": Environment setup failed. Quit immediately with Exit code 1. "%1$s": Application ID not passed as input Parameter. Quit immediately with Exit code 1. Please add application ID as input parameter to the PowerHA Smart Assist Application Servers. Failed to get value of "%1$s". "%1$s" Instance "%2$s" of "%3$s" - The sapstartsrv process failed to start. The instance and its sapstartsrv process will be manually cleand up. The instance Type ERS will not be stopped. "%1$s" Instance "%2$s" of "%3$s" - The instance and its sapstartsrv is manually cleand up. ERS processes will be still running. "%1$s" Instance "%2$s" of "%3$s" - The sapstartsrv process is running. Stop the instance using sapcontrol. "%1$s" Instance "%2$s" of "%3$s" - The instance is not of Type ERS. Stop instance now. "%1$s" Instance "%2$s" of "%3$s" - The instance failed to stop using sapcontrol -function Stop. Instance will be cleaned up manually. "%1$s" Instance "%2$s" of "%3$s" - The instance finshed manual cleanup. "%1$s" Instance "%2$s" of "%3$s" - Now stop sapstartsrv process. "%1$s" Instance "%2$s" of "%3$s" - The instance's sapstartsrv process failed to stop using sapcontrol -function StopService. Sapstartsrv will be cleaned up manually. "%1$s" Instance "%2$s" of "%3$s" - The instance and its sapstartsrv are stopped. "%1$s" Unable to identify the enqueue replicator type for the instance "%2$s". "%1$s" Instance "%2$s" of "%3$s" - Start moving ERS away from here.