Sets the number of intervening dispatches after which the SCHED_FIFO2 policy no longer favors a thread. Default: 7; Range: 0 to 100. Once a thread is running with the SCHED_FIFO2 policy, tuning this variable may or may not affect the performance of the thread and workload. Ideal values should be determined by trial and error.

Used to determine when threads can be migrated to other processors. Default: 4; Range: 0 to 100. This value is divided by 16 and then multiplied by the load average. The resulting value is used to determine whether jobs should be migrated to other nodes (essentially, it performs load balancing).

Keeps fixed-priority threads on the global run queue. Default: 0; Range: 0 or 1. If 1, fixed-priority threads are placed on the global run queue.

Sets the number of times to spin on a kernel lock before going to sleep. Default: 1 on uniprocessor systems, 16384 on MP systems. Range: -1 to 2^32. Increasing the value, or setting it to -1, on MP systems may reduce idle time; however, it may also waste CPU time in some situations. Increasing it on uniprocessor systems is not recommended. An example of setting this tunable with schedo appears at the end of this group of entries.

The number of clock ticks to wait before retrying a fork call that has failed for lack of paging space. Default: 10 (10-millisecond clock ticks); Range: 10 to n clock ticks. Use when the system is running out of paging space and a process cannot be forked. The system retries a failed fork five times. For example, if a fork() subroutine call fails because there is not enough paging space available to create a new process, the system retries the call after waiting the specified number of clock ticks.

Sets the short-term CPU usage decay rate. Default: 16; Range: 0 to 32. The default is to decay short-term CPU usage by 1/2 (16/32) every second. Decreasing this value enables foreground processes to avoid competition with background processes for a longer time.

Sets the weighting factor for short-term CPU usage in priority calculations. Default: 16; Range: 0 to 32. Run the command ps al. If you find that the PRI column has priority values for foreground processes (those with NI values of 20) that are higher than the PRI values of some background processes (NI values > 20), you can reduce the -r value. The default is to include 1/2 (16/32) of the short-term CPU usage in the priority calculation. Decreasing this value makes it easier for foreground processes to compete.

The number of clock ticks a thread can run before it is put back on the run queue. Default: 1; Range: a positive integer. Increasing this value can reduce the overhead of dispatching threads. The value refers to the total number of clock ticks in a timeslice and affects only fixed-priority processes.

Used to adjust the system clock with each clock tick, within the correction range of -1 to +1 seconds. Default: 100; Range: 1 to 100. This is used to correct clock drift.

Sets the number of seconds for which a recently resumed process that was previously suspended is exempt from suspension. Default: 2. This parameter is examined only if thrashing is occurring.
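These entries are scheduler tunables that are typically examined and set with the AIX schedo command. As a minimal sketch for the kernel-lock spin entry above, assuming the usual AIX tunable name maxspin:

    # Display the current value of the tunable
    schedo -o maxspin
    # Spin indefinitely on kernel locks for the current boot (MP systems)
    schedo -o maxspin=-1
    # Apply the change to both the current and reboot values
    schedo -p -o maxspin=-1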
Sets the minimum number of processes that are exempt from suspension. Default: 2. This number is in addition to kernel processes, processes with a fixed priority of less than 60, processes with pinned memory, and processes awaiting events. This parameter is examined only if there are threads on the suspended queue.

Sets the systemwide criterion used to determine when process suspension begins and ends (that is, when the system is thrashing). Default: 6, unless system RAM is 128 MB or more (in which case it is 0). If v_repage_hi * page-outs/sec > page steals, then processes may be suspended. If the system is paging and causing the scheduler to think it is thrashing when thrashing is not actually occurring, it may be useful to desensitize the algorithm by decreasing the -h value or setting it to 0. A worked example appears at the end of this group of entries.

Sets the per-process criterion used to determine which processes to suspend. Default: 4. This requires a higher level of repaging by a given process before it is a candidate for suspension by memory load control. This parameter is examined only if thrashing is occurring.

Sets the number of seconds to wait after thrashing ends before making suspended processes runnable. Default: 1. This parameter is examined only if thrashing is occurring.

Specifies the maximum number of processes per user ID. Default: 40; Range: 1 to 131072. If the value is reduced, it goes into effect only after a system boot. Users cannot fork any additional processes. This is a safeguard to prevent users from creating too many processes.

Sets the physical tick interval and synchronizes ticks across CPUs. Default: 1; Range: 1 to 100. This value times 10 ms is the tick interval and should divide evenly into 100. Use of this parameter makes system statistics less accurate.

Keeps non-MPI threads on the global run queue. Default: 0; Range: 0 or 1. If 1, only MPI and bound threads use local run queues, which may hurt performance.

Enables (1) or disables (0) the hardware priority boosting of hot locks. Default: 0 (disabled). Range: 0 to 1.

Enables (1) or disables (0) conferring to self after trying to acquire a krlock krlock_spinb4confer times. This parameter applies only to the 64-bit kernel. Default: 1 (enabled). Range: 0 to 1.

Enables (1) or disables (0) conferring after spinning slock_spinb4confer times before trying to acquire or allocate a krlock. This parameter applies only to the 64-bit kernel. Default: 0 (disabled). Range: 0 to 1.

Enables (1) or disables (0) krlocks. This parameter applies only to the 64-bit kernel. Default: 1 (enabled). Range: 0 to 1.

Number of additional acquisition attempts after spinning slock_spinb4confer times, and conferring (if krlock_conferb4alloc is on), before allocating a krlock. This parameter applies only to the 64-bit kernel. Default: 1. Range: 1 to MAXINT.

Number of krlock acquisition attempts before conferring to the krlock holder (or self). This parameter applies only to the 64-bit kernel. Default: 1024. Range: 0 to MAXINT.

Number of times to run the low hardware priority loop each time through the idle loop if no new work is found. Default: 100. Range: 0 to 1000000.

Minimum load above which secondary sibling threads will look for work in the global run queue in the dispatcher. Default: 256. Range: 0 to 4294967040 (0xFFFFFF00).

The minimum load above which idle load balancing for secondary sibling threads will search for work in the primary sibling thread's run queue. Default: 64. Range: 0 to 4294967040 (0xFFFFFF00).
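As a worked illustration of the systemwide thrashing test above, with hypothetical numbers: at the default v_repage_hi of 6 and 10 page-outs per second, 6 * 10 = 60, so processes may be suspended whenever page steals fall below 60 per second. Assuming this tunable is managed through schedo like the rest of this section, the check can be switched off entirely:

    # Desensitize memory load control by disabling the systemwide
    # thrashing criterion (equivalent to setting -h to 0)
    schedo -o v_repage_hi=0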
Minimum load above which the dispatcher will also search the run queues belonging to its sibling hardware threads. This is meant for load balancing on a physical processor and is not the same as idle load balancing, as this check is made in the dispatcher when choosing the next job to be dispatched. It works in conjunction with the smtrunq_load_diff tunable. Default: 256. Range: 0 to 4294967040 (0xFFFFFF00).

The maximum load below which the secondary sibling threads will try to shed work onto the primary sibling thread's run queue. Default: 64. Range: 0 to 4294967040 (0xFFFFFF00).

Minimum system load above which idle secondary sibling threads will be considered for new work even when the primary is not idle. Default: 384. Range: 0 to 4294967040 (0xFFFFFF00).

Minimum load above which secondary sibling threads will look for work among other run queues owned by CPUs within their S2 affinity domain during idle load balancing. Default: 134. Range: 0 to 4294967040 (0xFFFFFF00). It is recommended that this tunable never be set to a value less than the value of sidle_S1runq_mload.

Minimum load above which secondary sibling threads will look for work among other run queues owned by CPUs within their S3 affinity domain during idle load balancing. Default: 134. Range: 0 to 4294967040 (0xFFFFFF00). It is recommended that this tunable never be set to a value less than the value of sidle_S2runq_mload.

Minimum load above which secondary sibling threads will look for work on any local run queue. Default: 4294967040 (0xFFFFFF00). Range: 0 to 4294967040 (0xFFFFFF00). It is recommended that this tunable never be set to a value less than the value of sidle_S3runq_mload.

Number of attempts for a simple lock before conferring. Default: 1024. Range: 0 to MAXINT.

Minimum load difference between sibling run queue loads for a task to be stolen from the sibling's run queue. This is enabled only when the load is greater than the value of the search_smtrunq_mload tunable. Default: 2. Range: 1 to 4294967040 (0xFFFFFF00).

Amount of time, in microseconds, spent in the idle loop without useful work before snoozing (calling h_cede). A value of -1 disables snoozing; a value of 0 snoozes immediately. Default: 0. Range: -1 to 100000000 (100 seconds).

Enables (1) or disables (0) the unboost of the hot-lock priority in the flih. When disabled, the unboost occurs in the dispatcher. Default: 1 (enabled). Range: 0 to 1.

Setting this parameter to 1 weakens the barriers for migration of threads between MCMs under light load conditions. Default: 0. Range: 0 or 1.

Setting this tunable to a value greater than -1 enables the scheduler to enable and disable virtual processors based on the partition's CPU utilization. The value specified is the number of virtual processors to enable in addition to the virtual processors required to satisfy the workload. Default: 0. Range: -1 to 2147483647.

Number of ticks for which a thread is considered busy for the purposes of optimization in "thread_busy" load balancing. Default: 100 (1 second). Range: 10 to 1000 (0.1 to 10 seconds).

Controls SMT-core busy balancing. Default: 0 (balancing disabled). Range: 0, 1, or 2. A value of 1 indicates balancing is enabled only within MCMs (S2 groups). A value of 2 indicates balancing is fully enabled.

Controls chipset busy balancing. Default: 2 (fully enabled). Range: 0, 1, or 2. A value of 0 indicates balancing is disabled. A value of 1 indicates balancing is enabled only within MCMs (S2 groups).

Controls SMT-core busy balancing. Default: 2 (fully enabled). Range: 0, 1, or 2. A value of 0 indicates balancing is disabled. A value of 1 indicates balancing is enabled only within MCMs (S2 groups).
Specifies the utilization threshold for donation of a dedicated processor. Default: 80; Range: 0 to 100%. In a dedicated processor partition that is enabled for donation, idle processor capacity can be donated to the shared processor pool for use by shared processor partitions. If a dedicated processor's utilization is less than this threshold, the processor is donated for use by other partitions when it is idle. If its utilization is equal to or greater than this threshold, the processor is not donated when it is idle.

Controls the application of the virtual processor management feature of processor folding in a partition. Default: 1 (processor folding is enabled if shared processors are being used, but disabled if dedicated processors are being used). Range: 0, 1, 2, or 3. The virtual processor management feature of processor folding can be enabled or disabled based on whether a partition is a shared processor partition or a dedicated processor partition. When processor folding is enabled via this tunable, the vpm_xvcpus tunable can be used to control processor folding. A value of 0 indicates processor folding is disabled regardless of whether a partition is using shared processors or dedicated processors. A value of 1 indicates processor folding is enabled if shared processors are being used, but disabled if dedicated processors are being used. A value of 2 indicates processor folding is disabled if shared processors are being used, but enabled if dedicated processors are being used. A value of 3 indicates processor folding is enabled regardless of whether a partition is using shared processors or dedicated processors.
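Assuming this entry describes the vpm_fold_policy tunable (whose bit-level definition appears later in this section), forcing folding on for both partition types would look like this sketch:

    # Enable processor folding regardless of shared or dedicated
    # processors (value 3), for the current and reboot values
    schedo -p -o vpm_fold_policy=3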
Specifies the Performance Monitor PM_CYC and software event sampling frequency multiplier as a means to control the trace sampling frequency.

Specifies the Performance Monitor PM_* event sampling frequency multiplier as a means to control the trace sampling frequency.

Specifies the minimum number of completed instructions between Performance Monitor event samples as a means to control the trace sampling frequency.

Enables (1) or disables (0) fast locks. This parameter applies only to the 64-bit kernel on POWER6 or later. It is mutually exclusive with krlock_enable.

Enables (1) or disables (0) krlocks. This parameter is mutually exclusive with fast_locks.

Enables (1) or disables (0) process-scope disk statistics. Default: 1 (enabled). Range: 0 to 1. Disabling process-scope disk statistics improves performance when the statistics are not wanted.

The virtual processor management feature of processor folding can be enabled or disabled based on whether a partition has shared or dedicated processors. In addition, when the partition is in static power saving mode, processor folding is automatically enabled for both shared and dedicated processor partitions. When processor folding is enabled, the vpm_xvcpus tunable can be used to control processor folding. There are 4 bits in vpm_fold_policy that control processor folding:
- Bit 0 (0x1): When set to 1, processor folding is enabled if the partition is using shared processors.
- Bit 1 (0x2): When set to 1, processor folding is enabled if the partition is using dedicated processors.
- Bit 2 (0x4): When set to 1, the automatic setting of processor folding when the partition is in static power saving mode is disabled.
- Bit 3 (0x8): When set to 1, processor affinity is ignored while making folding decisions.
These bit values can be OR'ed together to form the desired value.
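A small sketch of combining those bits; the particular combination is hypothetical. Enabling folding for shared processor partitions while ignoring affinity in folding decisions ORs 0x1 with 0x8:

    # 0x1 (fold when using shared processors) | 0x8 (ignore affinity) = 9
    schedo -o vpm_fold_policy=9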
Amount of time, in microseconds, spent in the idle loop on a tertiary thread without useful work before snoozing (calling h_cede).

SMT scheduling options. A value of zero indicates no options. Default: 1 (idle shedding enabled).

Load barrier above which tertiary threads become active and below which tertiary threads shed work. It is recommended that this not be set to a value less than search_primrunq_load. The maximum value of 4294967040 corresponds to 0xFFFFFF00 hexadecimal.

Complex lock scalability transition level. The multiprogramming level at which complex locks transition for scalability. A value of 9999 disables the transition system-wide.

Enables (1) or disables (0) VMX/VSX. This flag has no effect on systems that do not support VMX/VSX. On systems that do have VMX/VSX support, disabling VMX/VSX makes the system behave as if it did not have a VMX/VSX unit; that is, trying to execute a VMX/VSX instruction results in an illegal operation exception. This feature can be useful for users who do not have applications using VMX/VSX, because on systems with VMX/VSX support the memory allocator forces 16-byte boundary alignment by default, instead of the 8-byte alignment used on systems without VMX/VSX. As a result, applications use more memory when moved from a system without VMX/VSX to a system with VMX/VSX, potentially causing an existing application to fail on the new system. Disabling VMX/VSX also disables kernel-level crypto functions.

Changes the default waitlock policy in the pthread library. This policy determines the algorithm used at thread block and thread wakeup time when user-mode locks and mutexes block. Only values 0 and 4 are supported.

Enables (1) or disables (0) lock spin interrupt masking. Default: 1 (enabled). Range: 0 to 1. Enabling lock spin interrupt masking can improve interrupt response time.

Sets the scaled throughput mode for processor folding. The throughput mode determines the desired level of SMT exploitation on each virtual processor core before unfolding another core. A higher value results in fewer cores being unfolded for a given workload. This increases scaled throughput at the expense of raw throughput. A value of zero disables this option, in which case the default (raw throughput) mode applies. A value of one enables raw throughput mode using the newer folding algorithm.
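The entry that follows notes that it applies only when vpm_throughput_mode is non-zero, so the mode described above is evidently the vpm_throughput_mode tunable. Opting in to the newer raw-throughput folding algorithm would then look like:

    # Select raw throughput mode with the newer folding algorithm;
    # 0 restores the default raw throughput behavior
    schedo -o vpm_throughput_mode=1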
Determines the number of cores to unfold in raw throughput mode before switching to scaled throughput mode. This parameter determines the number of cores that are unfolded in raw throughput mode before switching to the specified scaled throughput mode. In raw throughput mode, more cores are unfolded when only the primary thread of each unfolded core is busy. In scaled throughput mode, more cores are unfolded only when the desired level of SMT exploitation has been reached on the unfolded cores. This parameter can be used to achieve faster unfolding up to a desired number of cores before entering scaled throughput mode. A special value of zero can be specified to apply raw throughput mode up to the number of cores required to meet the entitled capacity of the partition. This parameter is applicable only when vpm_throughput_mode is non-zero.

Enables timer migration when folding and unfolding processors. When enabled, timers that have been marked as migratable are moved to another processor when the owning processor is folded. Enabling this option improves the effectiveness of processor folding by reducing the timer load on folded cores.

Enables (1) or disables (0) XRSET-aware load balancing. Default: 0 (disabled). Setting values higher than 1 is restricted. This flag enables XRSET-aware load balancing in the dispatcher. The dispatcher will load balance workloads running on an XRSET and also load balance non-XRSET work taking into account the presence of XRSETs. This tunable is supported only in dedicated partitions and only if static power saving is not enabled. Further, the XRSETs must contain all the CPUs of a core and must not span SRADs.

Enables the non-primary threads of a core to load balance within the core when unfolded loads are high. Non-primary threads of the core look for work within the core if the unfolded load (that is, the load ignoring folded cores) of the system and SRAD is higher than their respective load barriers.

Amount of time, in microseconds, spent in the idle loop on a secondary thread without useful work before snoozing (calling h_cede).

Amount of time, in microseconds, spent in the idle loop on a quaternary thread without useful work before snoozing (calling h_cede).

Load barrier above which quaternary threads become active and below which quaternary threads shed work. It is recommended that this not be set to a value less than tertiary_barrier_load. The maximum value of 4294967040 corresponds to 0xFFFFFF00 hexadecimal.

Enables CPU resource set (RSET) load balancing. This parameter enables enhanced load balancing for RSETs. A value of 1 enables balancing for exclusive RSETs. A value of 2 enables balancing for regular RSETs. A value of 3 enables both.

The number of 100 ms intervals between each check for an RSET imbalance at the system level. This parameter determines the frequency at which the check is made for an RSET imbalance that spans SRADs.

The number of intervals to wait before acting upon a system load imbalance. This parameter determines how quickly the system acts upon a cross-SRAD RSET load imbalance. This parameter times the interval determines the minimum amount of time between load balancing operations.

The number of 100 ms intervals between each check for an RSET imbalance at the SRAD level. This parameter determines the frequency at which the check is made for RSET imbalances within SRADs.

Disables (0) or enables (1) the use of pollset by poll() and select(). The pollset mechanism helps to get around scalability problems with poll() and select().
By enabling this tunable, poll() and select() make use of pollset routines under the covers to improve performance.

Specifies the maximum number of file descriptors for which poll() will use pollset. If poll_intercept_max_fd is zero or is less than poll_intercept_min_fd, poll() calls will not use pollset.

Specifies the minimum number of file descriptors for which poll() will use pollset. An example setting both intercept bounds appears at the end of this group of entries.

Specifies the maximum number of file descriptors for which select() will use pollset. If select_intercept_max_fd is zero or is less than select_intercept_min_fd, select() calls will not use pollset.

Specifies the minimum number of file descriptors for which select() will use pollset.

Allows the use of bindprocessor to bind a process to an exclusive CPU. A value of 0 indicates strict xrsets. A value of 1 indicates weak xrsets.

Enables (1) or disables (0) the high-resolution timer for select/poll. Default: 0 (disabled). Range: 0 to 1. Enable when high-resolution timers are required for select/poll.

Enables the boost latency dispatch policy. A value of 0 indicates no boost latency support. A value of 1 indicates boost latency support.

krlock bitwise options: 1 = enabled (any other bit also enables), 2 = limit to one owner, 4 = do not skip CPUs with pending interrupts, 8 = allow up to three shared owners, 16 = enable krlock spinning on DRW locks. This parameter is mutually exclusive with fast_locks.

Defines the blocking policy of pollset_ctl(). A value of 0 indicates a non-blocking policy for pollset_ctl(). A value of 1 indicates a blocking policy for pollset_ctl().

Enables the I/O latency-sensitive thread dispatch policy. A value of 0 indicates no I/O latency-sensitive thread support. A value of 1 indicates I/O latency-sensitive thread support.

Minimum load above which idle threads will look for work among other run queues owned by sibling CPUs within the core during idle load balancing. The maximum value of 4294967040 corresponds to 0xFFFFFF00 hexadecimal.

Enables the shedding dispatch policy. A value of 0 indicates shedding is disabled. A value of 1 indicates shedding is enabled.

Specifies the target utilization per core when VPM scaled throughput mode is enabled. Selects the target per-core utilization. This represents the percentage of core utilization used to determine that a core is fully utilized for purposes of folding and unfolding a core.

Specifies the target load per core when VPM scaled throughput mode is enabled. Selects the target per-core load. This represents the average core load used to determine that a core is fully utilized for purposes of folding and unfolding a core.
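The poll_intercept_min_fd and poll_intercept_max_fd pair above defines a window of descriptor counts for which poll() is intercepted. A hypothetical window, assuming both are set through schedo like the other entries here:

    # Intercept only poll() calls covering 64 to 1024 descriptors;
    # calls outside the window use the classic poll() path
    schedo -o poll_intercept_min_fd=64 -o poll_intercept_max_fd=1024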
Specifies the threshold for unfolding a core when VPM is enabled and set to raw throughput mode. Selects the target threshold above which a core is unfolded. Higher values mean that a higher utilization per core must be attained before another core is unfolded.
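Finally, a few general schedo idioms apply to every tunable in this section; the tunable named in the reset example is illustrative:

    # List the current values of all scheduler tunables
    schedo -a
    # Reset a single tunable to its default value
    schedo -d vpm_fold_policy
    # Reset every scheduler tunable to its default
    schedo -D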