SAP MaxDB Smart AssistSelect the SAP MaxDB instance to make highly availableSelect SAP MaxDB Database InstanceAdd a SAP MaxDB Database InstanceApplication NameMaxDB Instance NameMaxDB Administrator UserMaxDB Administrator PasswordPrimary NodeTakeover Node(s)Service IP LabelShared Volume Group(s)Modify the SAP MaxDB Database InstanceSAP MaxDB Database Instance(s)MaxDB Software Owner Verification CheckSelect the SAP MaxDB Hot Standby instance to make highly availableSelect SAP MaxDB Hot Standby Database InstanceAdd a SAP MaxDB Hot Standby Database InstanceSAP liveCache Hot Standby Instance NameSAP liveCache Hot Standby Instance Administrator UserSAP liveCache Hot Standby Instance Administrator PasswordSAP liveCache Hot Standby Instance DBM UserSAP liveCache Hot Standby Instance DBM PasswordData Volume Group(s)Log Volume Group(s)liveCache Global Filesystem Mount pointSAP liveCache Hot Standby Database Instance(s)SAP liveCache Hot Standby Instance DBM User XUSER Select the SAP MaxDB Database instance to configure it for high availability in a switchover cluster configuration. Add a SAP MaxDB database instance to the PowerHA SystemMirror cluster configuration. The PowerHA SystemMirror Smart Assist for SAP MaxDB will configure a resource group containing an application server to bring up/down the selected SAP MaxDB Database instance. In addition, a PowerHA SystemMirror application monitor will be added to monitor the core Database processes. Application Name A PowerHA SystemMirror cluster-wide unique name. The application name is a container for the PowerHA SystemMirror resource groups, application servers, and other cluster components generated to support the SAP MaxDB Database instance. The application name must conform to the same rules for naming PowerHA SystemMirror resource groups. The name can be no longer than 64 characters, and must contain only alphanumeric and underscore (_) characters. MaxDB Instance Name Running SAP MaxDB Instance selected in the prior SMIT screen. MaxDB Administrator User SAP MaxDB database system administrator user ID. MaxDB Administrator Password Password for the above database system administrator. This password is used to start/stop/monitor the database. Primary Node The primary node must be a node already defined to the PowerHA SystemMirror cluster. If the desired node is not already part of the PowerHA SystemMirror cluster configuration, the node must be added to the configuration prior to entering this SMIT dialog. Takeover Node(s) The takeover node(s) must already exist in the PowerHA SystemMirror cluster. Takeover nodes will participate in the resource groups generated to support the SAP MaxDB Database instance, and in the event of a failure will take over the selected SAP MaxDB Database instance designated in this SMIT dialog. Service IP Label The service IP label through which SAP MaxDB clients connect to this database instance. 
A new service IP label will be created if the label does not already exist in the PowerHA SystemMirror cluster configuration. Shared Volume Groups The Smart Assist for SAP MaxDB will auto-discover the volume groups associated with the selected database instance. Additional volume groups can be manually entered in this SMIT field. Modify the SAP MaxDB Database Instance Modify the existing PowerHA SystemMirror resources defined for the SAP MaxDB database instance. The existing PowerHA SystemMirror components will be modified for the selected Database instance. The PowerHA SystemMirror Smart Assist for SAP MaxDB will configure a resource group containing an application server to bring up/down the selected SAP MaxDB Database instance. In addition, a PowerHA SystemMirror application monitor will be added to monitor the core Database processes. Select the SAP liveCache Hot Standby Database instance to configure it for high availability in a switchover cluster configuration. Add a SAP liveCache database instance to the PowerHA SystemMirror cluster configuration. The PowerHA SystemMirror Smart Assist for SAP liveCache Hot Standby will configure a resource group containing an application server to bring up/down the selected SAP liveCache Hot Standby Database instance. SAP liveCache Hot Standby Instance Name Running liveCache Hot Standby Instance selected in the prior SMIT screen. SAP liveCache Hot Standby Instance Administrator User SAP liveCache Hot Standby Instance administrator user ID. SAP liveCache Hot Standby Instance Administrator Password Password for the above SAP liveCache Hot Standby Instance administrator user. SAP liveCache Hot Standby Instance DBM User ID SAP liveCache Hot Standby Instance DBM user ID. SAP liveCache Hot Standby Instance DBM Password Password for the above SAP liveCache Hot Standby Instance DBM user. Data Volume Groups The Smart Assist for SAP MaxDB will auto-discover the data volume group(s) associated with the selected database instance. Additional volume groups can be manually entered in this SMIT field. Log Volume Groups The Smart Assist for SAP MaxDB will auto-discover the log volume group(s) associated with the selected database instance. Additional volume groups can be manually entered in this SMIT field. liveCache Global Filesystem Mount point Add a Global Filesystem which is accessible to all the nodes. This Filesystem will be used as the Lock Directory for Hot Standby. This mount point should always be made available on all cluster nodes. SAP liveCache Hot Standby Instance DBM User XUSER ID SAP liveCache Hot Standby Instance DBM user XUSER ID. The user ID must be created from the XUSER interface. %1$s : Using an already available MAXDB_PROGRAM_PATH%1$s : setting MAXDB_PROGRAM_PATH to %2$sdbmcli command is available at : %1$sUnable to locate dbmcli command or dbmcli is not executableMaxDB Version is %1$sMaxDB version %1$s is found. Only MaxDB version 7.7.x.x is supportedNo MaxDB installation found on this node. Can't proceedUnable to read IndepPrograms value from /etc/opt/sdb. Can't proceedDBMCLI_COMMAND is not set. Unable to find any database instancesInstance name is not specified for discovering Volume GroupsPerforming cluster-wide volume group discovery. MaxDB config location %1$s is on %2$s Volume GroupUnable to read SdbOwner value from /etc/opt/sdb. Can't proceedUnable to read SdbGroup value from /etc/opt/sdb. 
Can't proceedUnable to delete Application Monitor : %1$sDeleting Application Monitor : %1$sUnable to delete Application Server : %1$sPlease do a verify and sync from another node in the cluster to restore the previous configuration.Deleting Resource Group : %1$sUnable to delete Resource Group : %1$sDeleting File Collection : %1$sUnregistering the application id : %1$s from Smart Assist FrameworkUnable to unregister application with Smart Assist frameworkDeleting Application Server : %1$sUnable to delete service IP label: %1$s# No MaxDB installation found on this node. Cannot proceed.# Unable to get the XUSER information, enter XUSER manually.Invalid application name : %1$sNode %1$s was used more than once in the takeover or primary node listsVolume group: %1$s is already defined to PowerHA SystemMirror configuration.Service IP label: %1$s is already defined to PowerHA SystemMirror resource group : %2$sThe Service IP label: %1$s is not resolvable on the local node using /etc/hostsUnable to create the service IP label: %1$sUnable to get list of instance files from Remote Node : %1$sUnable to copy file %1$s from %2$s nodeCreating File Collection %1$sProblem in creating File Collection %1$sAdding file to %1$s File Collection %2$sProblem in adding %1$s File Collection %2$sDeleting File Collection %1$sUnable to connect to Database using the supplied user ID and password. Please ensure that correct credentials are providedAdding MaxDB Instance to PowerHA SystemMirror configuration.Resource group: %1$s already exists. Please choose another name for the application.Application server name : %1$s already exists. Please choose another name for the application.Application monitor name : %1$s already exists. Please choose another name for the application.Adding Service IP %1$s label to PowerHA SystemMirror configurationAdding Resource Group %1$s to PowerHA SystemMirror configurationError while adding Resource Group to PowerHA SystemMirror Configuration.Rolling back the already created configuration.Adding Application Server %1$s to PowerHA SystemMirror configurationAdding Application monitor to %1$s to PowerHA SystemMirror configurationError while adding Application Server to PowerHA SystemMirror Configuration.Error while adding Application Monitor to PowerHA SystemMirror Configuration.Error while modifying Resource Group in PowerHA SystemMirror Configuration.Registering the application with Smart Assist FrameworkError while registering application to Smart Assist framework.Error while getting MaxDB Software Owner User IDError while getting MaxDB Software Owner Group IDError while generating verification scriptError while creating instance file collectionSuccessfully added SAP MaxDB Database Instance %1$s to PowerHA SystemMirror ConfigurationProblem with XML configuration file. Ensure a valid XML file is suppliedProblem in parsing %1$s tag in function %2$sSupplied directory %1$s for MAXDB_PROGRAM_PATH does not exist. Primary Node %1$s is not valid in the cluster.One of the Takeover nodes from %1$s is not valid in the cluster.Unable to read the configuration file. Please ensure the path is correctReading XML file for Database related propertiesReading XML file completed. 
Proceeding for configuration.Duplicate node names for Primary and Takeover node listsFailed to create required PowerHA SystemMirror ConfigurationEssential argument "%1$s" is either NULL or missingINFO: The application "%1$s" will be grouped with the Resource Group "%2$s" as one of the resources is already defined to PowerHA SystemMirror.Volume Group "%1$s" and Service IP "%2$s" should belong to the same Resource Group.Unable to fetch MAXDB_PROGRAM_PATH from HACMPsa_metadata ODM. Can't proceed.Deleting the existing configuration for MaxDB instance %1$sRecreating the configuration.No Application ID passed. Can't proceed.Unable to fetch MAXDB_PROGRAM_PATH from HACMPsa_metadata ODMNo dbmcli executable found.No running x_server process. Attempting to restart.x_server process is running.Instance Credentials are not available in HACMPsa_metadata. Can't monitor instance state.Service IP information for the instance is not available. Can't monitor instance state.dbmcli command returned a non-zero value with info as : %1$sDatabase instance state is ONLINEDatabase state is NOT ONLINE. State information found as : %1$sUnable to stop. dbmcli command returned a non-zero value with info as : %1$sAttempt to stop the instance is OK. Verifying the state.Unable to verify the Database instance state.Unable to verify the Database instance state. It seems to be in a different state : %1$sDatabase state is OFFLINE. Now attempting to stop x_serverAttempting to start x_serverx_server started.Unable to start x_server. Can't proceed with Database start.Unable to start. dbmcli command returned a non-zero value with info as : "%1$s"Attempt to start the instance is OK. Verifying the state.Unable to verify the Database instance state.Unable to verify the Database instance state. It seems to be in a different state : "%1$s"Database state is ONLINE. liveCache Standby instance is now being stopped. liveCache Standby instance is successfully stopped. Unable to stop liveCache Standby instance. Manual intervention required. No Standby instance detected. Nothing to stop. MONITOR.run detected. Remove all MONITOR states and arm monitor if LC10.start is set. MONITOR.runtime detected. Remove all MONITOR states and arm monitor if LC10.start is set. Undefined state of LC10. This should never be reached if the startup monitor of the Master is correctly configured. Service not started by LC10. Monitor not armed. The service of "%1$s" "%2$s" is available. ERROR: The service of "%1$s" "%2$s" is not available. The service of "%1$s" "%2$s" is available. ERROR: Instance "%1$s" "%2$s" is not running. WARNING: This script is not intended to run on this node. The Hot-Standby Database should run on PowerHA cluster nodes only. Start of Service is not allowed by APO. Need to initialize lock directory. Write LC10.stop lock to give the application monitors the right status. Started resource group and x_server process. ERROR: The liveCache Master "%1$s" was not started. liveCache "%1$s" of "%2$s" started successfully. The LiveCache Slave is intentionally not stopped. The LiveCache Master is successfully stopped. Unable to stop liveCache Master "%1$s". Exiting Filesystem "%1$s" is not mounted. Cannot execute "%1$s". "%1$s" not found. Instance "%1$s" has a state of "%2$s". Check if x_server is running on node "%1$s". Start x_server now. ERROR: Unable to start x_server. x_server already started on node "%1$s". ERROR: Host "%1$s" is not configured. The state of the liveCache "%1$s" Master is "%2$s". Bringing master instance from OFFLINE into ONLINE state. 
Sleeping "%1$s" seconds. Resuming from sleep. Successfully started master instance. db_state is "%1$s". Making the Standby instance the Master. Master instance is already online. Starting standby instance now. Standby Resource group is not online on any node. Standby instance will not be started. Standby Resource group is online on "%1$s" Successfully started Standby Instance. Slave instance is standby but not registered. Restarting Standby instance. WARNING: Restarting Slave failed. Master liveCache instance is detected. Standby is not yet started. ERROR: No Master instance detected. Standby instance will not be started. Standby instance is now in standby state. Cleaning up DB-environment of Standby. Starting Slave from Master node. Write LC10.start Remove LC10.stop. The PowerHA application monitors for the MASTER and STANDBY resource groups are brought into active state. Call function lc_start_slave_remote to bring Slave online. APO started service of the Master and SLAVE instance. The SAP APO transaction of SAP System "%1$s" failed to start the Master instance. Please contact your APO administrator. SAP Kernel Error. The script lccluster was called with unexpected parameters. Please contact your SAP Administrator. Write LC10.stop Remove LC10.start. The PowerHA application monitors for the MASTER and SLAVE resource groups are brought into passive state. Stop STANDBY on "%1$s" SAP APO transaction LC10 stopped the Master instance successfully but Standby instance stopping failed. APO successfully stopped Master and Slave service. The SAP APO transaction of SAP System "%1$s" failed to stop the Master instance. Please contact your APO administrator. All Application monitoring for LC in the cluster is inactive by intention. Adjust lockfile directory if appropriate. SAP Kernel Error. The script lccluster was called with unexpected parameters from lcinit or dbmsrv. Please contact your SAP Administrator. LC_XUSER is not exported. Please export LC_XUSER before invoking Smart Assist Starting runtime monitorStarting startup monitorUsage error: This application monitor is intended to be used this way ERROR: "%1$s" Instance "%2$s" is not runningERROR: Script invoked with wrong arguments. Supported arguments are START and STOP ERROR: The LiveCache Primary "%1$s" was not started. The LiveCache Auxiliary is intentionally not stopped. The LiveCache Primary is successfully stopped. Unable to stop LiveCache Primary "%1$s". Exiting The state of the LiveCache "%1$s" Primary is "%2$s". Bringing Primary instance from OFFLINE into ONLINE state. Successfully started Primary instance. DB state is "%1$s". Making the Standby instance the Primary. Primary instance is already online. Starting standby instance now. Auxiliary instance is standby but not registered. Restarting Standby instance. WARNING: Restarting Auxiliary failed. Primary LiveCache instance is detected. Standby is not yet started. ERROR: No Primary instance detected. Standby instance will not be started. Starting Auxiliary from Primary node. APO started service of the Primary and Auxiliary instance. The SAP APO transaction of SAP System "%1$s" failed to start the Primary instance. Please contact your APO administrator. SAP APO transaction LC10 stopped the Primary instance successfully but Standby instance stopping failed. APO successfully stopped Primary and Auxiliary service. The SAP APO transaction of SAP System "%1$s" failed to stop the Primary instance. Please contact your APO administrator. The x_server process for LC connectivity has been stopped. 
The LiveCache Auxiliary is intentionally not stopped. The PowerHA application monitors for the Primary and Auxiliary resource groups are brought into active state. The PowerHA application monitors for the Primary and Auxiliary resource groups are brought into passive state.
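
The monitor messages above describe checking the database instance state with the dbmcli command and treating any state other than ONLINE (or a non-zero dbmcli return code) as a failure. The following is a minimal sketch of such a check, assuming dbmcli under a typical MaxDB independent-programs path and hypothetical instance name and DBM credentials; it is illustrative only and is not the Smart Assist's own application monitor.

    #!/usr/bin/env python3
    # Minimal sketch of a db_state check: run "dbmcli ... db_state" and treat
    # the instance as healthy only when the reported state is ONLINE.
    # DBMCLI, INSTANCE, and CREDENTIALS are illustrative assumptions, not
    # values taken from the Smart Assist itself.
    import subprocess
    import sys

    DBMCLI = "/sapdb/programs/bin/dbmcli"   # assumed MAXDB_PROGRAM_PATH location
    INSTANCE = "LC1"                        # hypothetical MaxDB/liveCache instance
    CREDENTIALS = "control,secret"          # hypothetical DBM user,password pair

    def db_state() -> str:
        """Return the state string reported by 'dbmcli db_state', e.g. ONLINE."""
        result = subprocess.run(
            [DBMCLI, "-d", INSTANCE, "-u", CREDENTIALS, "db_state"],
            capture_output=True, text=True, check=False,
        )
        if result.returncode != 0:
            # Mirrors "dbmcli command returned a non-zero value" in the catalog.
            raise RuntimeError(result.stdout + result.stderr)
        # dbmcli typically prints "OK", a "State" header line, then the state.
        lines = [line.strip() for line in result.stdout.splitlines() if line.strip()]
        return lines[-1] if lines else "UNKNOWN"

    if __name__ == "__main__":
        try:
            state = db_state()
        except RuntimeError as err:
            print(f"dbmcli failed: {err}")
            sys.exit(1)
        print(f"Database instance state is {state}")
        sys.exit(0 if state == "ONLINE" else 1)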