* IBM_PROLOG_BEGIN_TAG
* This is an automatically generated prolog.
*
* bos720 src/bos/usr/ccs/lib/libperfstat/README.WPAR 1.3
*
* OBJECT CODE ONLY SOURCE MATERIALS
*
* COPYRIGHT International Business Machines Corp. 2007
* All Rights Reserved
*
* The source code for this program is not published or otherwise
* divested of its trade secrets, irrespective of what has been
* deposited with the U.S. Copyright Office.
*
* IBM_PROLOG_END_TAG


Perfstat API updates for Workload Partitions
---------------------------------------------

Perfstat has two types of APIs. Global types return global metrics related to
a set of components, while individual types return metrics related to
individual components. This document describes the behavior of these APIs when
they are called from a process running inside a WPAR, and it documents the
newly introduced APIs that retrieve metrics for a set of components in a WPAR
context.

The following table lists the behavior of the existing APIs inside a WPAR:

**Note: LPAR represents Logical Partition. WPAR represents Workload Partition.
        An LPAR can contain one or more WPARs.

-------------------------------------------------------------------------------
|API                     | Behavior inside WPAR                               |
|------------------------|----------------------------------------------------|
|perfstat_cpu_total      |Returns parent LPAR CPU total metrics.              |
|------------------------|----------------------------------------------------|
|perfstat_memory_total   |Returns parent LPAR memory total metrics.           |
|------------------------|----------------------------------------------------|
|perfstat_partition_total|Returns parent LPAR partition total metrics.        |
|------------------------|----------------------------------------------------|
|perfstat_cpu            |Returns parent LPAR individual CPU metrics.         |
|------------------------|----------------------------------------------------|
|perfstat_disk_*         |Disks are not supported inside a WPAR, so these APIs|
|                        |fail gracefully when called from a WPAR.            |
|------------------------|----------------------------------------------------|
|perfstat_netinterface_* |Returns the calling WPAR's network interface        |
|                        |metrics.                                            |
|------------------------|----------------------------------------------------|
|perfstat_pagingspace    |Not supported inside a WPAR, so the API fails       |
|                        |gracefully when called from a WPAR.                 |
|------------------------|----------------------------------------------------|
|perfstat_netbuffer      |Returns the calling WPAR's network buffer metrics.  |
|------------------------|----------------------------------------------------|
|perfstat_protocol       |Returns the calling WPAR's protocol metrics.        |
-------------------------------------------------------------------------------
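As a minimal illustrative sketch of this behavior (error handling kept
intentionally small), the following program, run from inside a WPAR, still
receives parent LPAR data from perfstat_cpu_total, while a paging space count
query fails gracefully instead of returning LPAR data:

#include <stdio.h>
#include <libperfstat.h>

int main(void)
{
    perfstat_cpu_total_t ct;
    int rc;

    /* Inside a WPAR this still reports the parent LPAR's CPU totals. */
    rc = perfstat_cpu_total(NULL, &ct, sizeof(perfstat_cpu_total_t), 1);
    if (rc <= 0) {
        perror("perfstat_cpu_total");
        return 1;
    }
    printf("parent LPAR: %d active CPUs, %s\n", ct.ncpus, ct.description);

    /* Paging spaces are not visible inside a WPAR, so this count query
     * fails gracefully instead of returning parent LPAR data. */
    rc = perfstat_pagingspace(NULL, NULL, sizeof(perfstat_pagingspace_t), 0);
    if (rc <= 0)
        printf("perfstat_pagingspace is not available inside this WPAR\n");
    else
        printf("%d paging space(s) visible\n", rc);

    return 0;
}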
Following are the new APIs that can be used to retrieve WPAR-related total
metrics for CPU, memory or the partition.

WPAR Interfaces
---------------

WPAR interfaces report metrics related to a set of components on a WPAR
(such as CPU or memory).

All of the following WPAR interfaces use the naming convention
perfstat_subsystem_total_wpar, and use a common signature:

perfstat_cpu_total_wpar       Retrieves WPAR total CPU usage metrics
perfstat_memory_total_wpar    Retrieves WPAR total memory usage metrics
perfstat_wpar_total           Retrieves WPAR partition metrics

The common signature used by all of the WPAR interfaces is as follows:

int perfstat_subsystem_total_wpar(perfstat_id_wpar_t *name,
                                  perfstat_subsystem_total_wpar_t *userbuff,
                                  int sizeof_struct,
                                  int desired_number);

perfstat_id_wpar_t is defined as follows:

typedef struct {                              /* WPAR identifier */
    wparid_specifier spec;                    /* Specifier to choose wpar id or name */
    union {
        cid_t wpar_id;                        /* WPAR ID */
        char wparname[MAXCORRALNAMELEN+1];    /* WPAR NAME */
    } u;
    char name[IDENTIFIER_LENGTH];             /* name of the structure element identifier */
} perfstat_id_wpar_t;

The usage of the parameters for all of the interfaces is as follows:

perfstat_id_wpar_t *name                   Should be NULL when called inside a
                                           WPAR. When called from the global
                                           environment, either wpar_id or
                                           wparname should be specified.

                                           **Note: The 'name' field in
                                           perfstat_id_wpar_t is reserved for
                                           future use and should always be NULL.

perfstat_subsystem_total_wpar_t *userbuff  A pointer to a memory area with
                                           enough space for the returned
                                           structure.

int sizeof_struct                          Should be set to
                                           sizeof(perfstat_subsystem_total_wpar_t).

int desired_number                         Reserved for future use, must be set
                                           to 0 or 1.

The return value will be -1 in case of errors. Otherwise, the number of
structures copied is returned. This is always 1.

An exception to this scheme is: when name=NULL, userbuff=NULL and
desired_number=0, the total number of structures available is returned.
This is always 1.
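For example, a caller in the global environment can select a WPAR by name and
retrieve its total CPU metrics. The fragment below is a minimal sketch; it
assumes the WPARNAME specifier (the companion of the WPARID specifier used in
the samples later in this file) and uses "mywpar" as a placeholder WPAR name:

#include <stdio.h>
#include <string.h>
#include <libperfstat.h>

int main(void)
{
    perfstat_id_wpar_t wparid;
    perfstat_cpu_total_wpar_t cpustats;

    memset(&wparid, 0, sizeof(wparid));
    wparid.spec = WPARNAME;                     /* select the WPAR by name */
    strcpy(wparid.u.wparname, "mywpar");        /* placeholder WPAR name */

    if (perfstat_cpu_total_wpar(&wparid, &cpustats,
                                sizeof(perfstat_cpu_total_wpar_t), 1) != 1) {
        perror("perfstat_cpu_total_wpar");
        return 1;
    }
    printf("%d active CPUs, processor type %s\n",
           cpustats.ncpus, cpustats.description);
    return 0;
}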
The following sections provide examples of the type of data returned and code
using each of the interfaces.

perfstat_cpu_total_wpar Interface
---------------------------------

The perfstat_cpu_total_wpar function returns a perfstat_cpu_total_wpar_t
structure, which is defined in the libperfstat.h file.
Selected fields from the perfstat_cpu_total_wpar_t structure include:

processorHZ     Processor speed in Hertz
description     Processor type
ncpus           Current number of active CPUs in the parent LPAR
puser           Total number of physical processor ticks spent in user mode
psys            Total number of physical processor ticks spent in system
                (kernel) mode
pidle           Total number of physical processor ticks spent idle with no
                I/O pending
pwait           Total number of physical processor ticks spent idle with I/O
                pending

Several other processor-related counters (such as number of fork calls,
process switches and load average) are also returned.
For a complete list, see the perfstat_cpu_total_wpar_t section of the
libperfstat.h header file.

The following code shows an example of how perfstat_cpu_total_wpar is used:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <unistd.h>
#include <errno.h>
#include <sys/proc.h>      /* SBITS, used to scale the loadavg values */
#include <sys/corrals.h>   /* corrallist_t, getcorrallist(), corral_getcid() */
#include <libperfstat.h>

int main(int argc, char *argv[])
{
    int i;
    perfstat_cpu_total_t cpu_total_buffer;
    perfstat_cpu_total_wpar_t *cpu_total_wpar_buffer;
    perfstat_id_wpar_t wparid;
    unsigned long long delt_tot, delt_user, delt_sys, delt_idle, delt_wait;
    int rc, total_wpars = 0;
    corrallist_t *wpar_list = NULL;
    cid_t cid = corral_getcid();
    unsigned long long last_user, last_sys, last_idle, last_wait,
                       *last_wpar_user, *last_wpar_sys;

    /* Check whether we are running inside a WPAR */
    if (cid) {
        total_wpars = 1;
    } else {
        /* Get the number of WPARs in the system */
        rc = getcorrallist(NULL, &total_wpars);
        if ((rc == -1) && (errno != ERANGE) && (errno != ENOSPC)) {
            perror("getcorrallist");
            exit(1);
        }
    }

    /* If WPARs exist in the system, get the WPAR details */
    if (total_wpars > 0) {
        /* Obtain the list of WPARs */
        wpar_list = (corrallist_t *) malloc(total_wpars * sizeof(corrallist_t));
        if (wpar_list == NULL) {
            perror("malloc");
            exit(1);
        }
        if (!cid) {
            rc = getcorrallist(wpar_list, &total_wpars);
            if (rc != 0) {
                perror("getcorrallist()");
                exit(1);
            }
        } else {
            wpar_list[0].cid = cid;
            strcpy(wpar_list[0].cname, "Local");
        }

        last_wpar_user = (unsigned long long *)
                         malloc(total_wpars * sizeof(unsigned long long));
        if (last_wpar_user == NULL) {
            perror("malloc");
            exit(1);
        }
        last_wpar_sys = (unsigned long long *)
                        malloc(total_wpars * sizeof(unsigned long long));
        if (last_wpar_sys == NULL) {
            perror("malloc");
            exit(1);
        }
        cpu_total_wpar_buffer = (perfstat_cpu_total_wpar_t *)
                                malloc(total_wpars * sizeof(perfstat_cpu_total_wpar_t));
        if (cpu_total_wpar_buffer == NULL) {
            perror("malloc");
            exit(1);
        }

        /* Obtain WPAR data */
        bzero(&wparid, sizeof(perfstat_id_wpar_t));
        wparid.spec = WPARID;
        for (i = 0; i < total_wpars; i++) {
            wparid.u.wpar_id = wpar_list[i].cid;
            perfstat_cpu_total_wpar((cid) ? NULL : &wparid,
                                    &cpu_total_wpar_buffer[i],
                                    sizeof(perfstat_cpu_total_wpar_t), 1);
        }
    }

    /* get LPAR data */
    perfstat_cpu_total(NULL, &cpu_total_buffer, sizeof(perfstat_cpu_total_t), 1);

    /* print LPAR general processor information */
    printf("LPAR Configuration:\n");
    printf("\tProcessors: (%d:%d) %s running at %llu MHz\n",
           cpu_total_buffer.ncpus, cpu_total_buffer.ncpus_cfg,
           cpu_total_buffer.description,
           cpu_total_buffer.processorHZ / 1000000);

    /* print WPAR general processor information */
    printf("WPARs Configuration:\n");
    for (i = 0; i < total_wpars; i++) {
        printf("\tWPAR %s, Processor %s running at %llu MHz\n",
               wpar_list[i].cname,
               cpu_total_wpar_buffer[i].description,
               cpu_total_wpar_buffer[i].processorHZ / 1000000);
    }

    /* save values for delta calculations */
    last_user = cpu_total_buffer.puser;
    last_sys  = cpu_total_buffer.psys;
    last_idle = cpu_total_buffer.pidle;
    last_wait = cpu_total_buffer.pwait;
    for (i = 0; i < total_wpars; i++) {
        last_wpar_user[i] = cpu_total_wpar_buffer[i].puser;
        last_wpar_sys[i]  = cpu_total_wpar_buffer[i].psys;
    }

    printf("\n%-18s User Sys Idle Wait LoadAvg\n", "WPAR");

    while (1 == 1) {
        sleep(2);   /* get new values after two seconds */
        perfstat_cpu_total(NULL, &cpu_total_buffer, sizeof(perfstat_cpu_total_t), 1);
        for (i = 0; i < total_wpars; i++) {
            wparid.u.wpar_id = wpar_list[i].cid;
            perfstat_cpu_total_wpar((cid) ? NULL : &wparid,
                                    &cpu_total_wpar_buffer[i],
                                    sizeof(perfstat_cpu_total_wpar_t), 1);
        }

        /* calculate current total number of ticks */
        delt_user = cpu_total_buffer.puser - last_user;
        delt_sys  = cpu_total_buffer.psys  - last_sys;
        delt_idle = cpu_total_buffer.pidle - last_idle;
        delt_wait = cpu_total_buffer.pwait - last_wait;
        delt_tot  = delt_user + delt_sys + delt_idle + delt_wait;

        /* print percentages, total delta ticks and tick rate per cpu per sec */
        printf("%-18s %#5.1f %#5.1f %#5.1f %#5.1f %#5.1f\n", "SYSTEM",
               100.0 * (double) delt_user / (double) delt_tot,
               100.0 * (double) delt_sys  / (double) delt_tot,
               100.0 * (double) delt_idle / (double) delt_tot,
               100.0 * (double) delt_wait / (double) delt_tot,
               (double) cpu_total_buffer.loadavg[0] / (double) (1 << SBITS));

        /* print each WPAR's user and system time as a percentage of the
           total LPAR ticks consumed during the interval */
        for (i = 0; i < total_wpars; i++) {
            printf("%-18s %#5.1f %#5.1f\n", wpar_list[i].cname,
                   100.0 * (double) (cpu_total_wpar_buffer[i].puser - last_wpar_user[i]) /
                           (double) delt_tot,
                   100.0 * (double) (cpu_total_wpar_buffer[i].psys - last_wpar_sys[i]) /
                           (double) delt_tot);
        }

        /* save current values for the next delta calculation */
        last_user = cpu_total_buffer.puser;
        last_sys  = cpu_total_buffer.psys;
        last_idle = cpu_total_buffer.pidle;
        last_wait = cpu_total_buffer.pwait;
        for (i = 0; i < total_wpars; i++) {
            last_wpar_user[i] = cpu_total_wpar_buffer[i].puser;
            last_wpar_sys[i]  = cpu_total_wpar_buffer[i].psys;
        }
    }
}

perfstat_memory_total_wpar Interface
------------------------------------

The perfstat_memory_total_wpar function returns a perfstat_memory_total_wpar_t
structure, which is defined in the libperfstat.h file.
For a complete list of the fields returned, see the perfstat_memory_total_wpar_t
section of the libperfstat.h header file.

The following code shows an example of how perfstat_memory_total_wpar is used:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <errno.h>
#include <sys/corrals.h>   /* corrallist_t, getcorrallist(), corral_getcid() */
#include <libperfstat.h>

int main(int argc, char *argv[])
{
    int i;
    perfstat_memory_total_t minfo;
    perfstat_memory_total_wpar_t *minfo_wpar;
    perfstat_id_wpar_t wparid;
    int rc, total_wpars = 0;
    corrallist_t *wpar_list = NULL;
    cid_t cid = corral_getcid();

    /* Check whether we are running inside a WPAR */
    if (cid) {
        total_wpars = 1;
    } else {
        /* Get the number of WPARs in the system */
        rc = getcorrallist(NULL, &total_wpars);
        if ((rc == -1) && (errno != ERANGE) && (errno != ENOSPC)) {
            perror("getcorrallist");
            exit(1);
        }
    }

    /* If WPARs exist in the system, get the WPAR details */
    if (total_wpars > 0) {
        /* Obtain the list of WPARs */
        wpar_list = (corrallist_t *) malloc(total_wpars * sizeof(corrallist_t));
        if (wpar_list == NULL) {
            perror("malloc");
            exit(1);
        }
        if (!cid) {
            rc = getcorrallist(wpar_list, &total_wpars);
            if (rc != 0) {
                perror("getcorrallist()");
                exit(1);
            }
        } else {
            wpar_list[0].cid = cid;
            strcpy(wpar_list[0].cname, "Local");
        }

        minfo_wpar = (perfstat_memory_total_wpar_t *)
                     malloc(total_wpars * sizeof(perfstat_memory_total_wpar_t));
        if (minfo_wpar == NULL) {
            perror("malloc");
            exit(1);
        }

        /* Obtain WPAR data */
        bzero(&wparid, sizeof(perfstat_id_wpar_t));
        wparid.spec = WPARID;
        for (i = 0; i < total_wpars; i++) {
            wparid.u.wpar_id = wpar_list[i].cid;
            perfstat_memory_total_wpar((cid) ? NULL : &wparid, &minfo_wpar[i],
                                       sizeof(perfstat_memory_total_wpar_t), 1);
        }
    }

    /* get LPAR data */
    perfstat_memory_total(NULL, &minfo, sizeof(perfstat_memory_total_t), 1);

    printf("%18s Memory statistics\n", "SYSTEM");
    printf("-----------------------------------\n");
    printf("real memory size : %llu MB\n", minfo.real_total * 4096 / 1024 / 1024);
    printf("reserved paging space : %llu MB\n", minfo.pgsp_rsvd);
    printf("virtual memory size : %llu MB\n", minfo.virt_total * 4096 / 1024 / 1024);
    printf("number of free pages : %llu\n", minfo.real_free);
    printf("number of pinned pages : %llu\n", minfo.real_pinned);
    printf("number of pages in file cache : %llu\n", minfo.numperm);
    printf("total paging space pages : %llu\n", minfo.pgsp_total);
    printf("free paging space pages : %llu\n", minfo.pgsp_free);
    printf("used paging space : %3.2f%%\n",
           (float) (minfo.pgsp_total - minfo.pgsp_free) * 100.0 /
           (float) minfo.pgsp_total);
    printf("number of paging space page ins : %llu\n", minfo.pgspins);
    printf("number of paging space page outs : %llu\n", minfo.pgspouts);
    printf("number of page ins : %llu\n", minfo.pgins);
    printf("number of page outs : %llu\n", minfo.pgouts);

    for (i = 0; i < total_wpars; i++) {
        printf("%18s Memory statistics\n", wpar_list[i].cname);
        printf("-----------------------------------\n");
        printf("number of pinned pages : %llu\n", minfo_wpar[i].real_pinned);
        printf("number of pages in file cache : %llu\n", minfo_wpar[i].numperm);
        printf("number of paging space page ins : %llu\n", minfo_wpar[i].pgspins);
        printf("number of paging space page outs : %llu\n", minfo_wpar[i].pgspouts);
        printf("number of page ins : %llu\n", minfo_wpar[i].pgins);
        printf("number of page outs : %llu\n", minfo_wpar[i].pgouts);
    }
}
The preceding program produces output similar to the following:

===============================================================================
            SYSTEM Memory statistics
-----------------------------------
real memory size : 2048 MB
reserved paging space : 512 MB
virtual memory size : 2560 MB
number of free pages : 307915
number of pinned pages : 101182
number of pages in file cache : 29043
total paging space pages : 131072
free paging space pages : 129574
used paging space : 1.14%
number of paging space page ins : 0
number of paging space page outs : 0
number of page ins : 52077
number of page outs : 42395
             wpar1 Memory statistics
-----------------------------------
number of pinned pages : 90
number of pages in file cache : 809
number of paging space page ins : 0
number of paging space page outs : 0
number of page ins : 2061
number of page outs : 6781
             wpar2 Memory statistics
-----------------------------------
number of pinned pages : 80
number of pages in file cache : 867
number of paging space page ins : 0
number of paging space page outs : 0
number of page ins : 1653
number of page outs : 6406
             wpar3 Memory statistics
-----------------------------------
number of pinned pages : 80
number of pages in file cache : 867
number of paging space page ins : 0
number of paging space page outs : 0
number of page ins : 1653
number of page outs : 6415
             wpar4 Memory statistics
-----------------------------------
number of pinned pages : 144
number of pages in file cache : 867
number of paging space page ins : 0
number of paging space page outs : 0
number of page ins : 1661
number of page outs : 6411
===============================================================================

perfstat_wpar_total Interface
-----------------------------

The perfstat_wpar_total function returns a perfstat_wpar_total_t structure,
which is defined in the libperfstat.h file.
Selected fields from the perfstat_wpar_total_t structure include:

type            WPAR type
online_cpus     Number of virtual CPUs currently allocated to the WPAR if a
                resource set is used; otherwise, the number of virtual CPUs
                currently allocated to the parent LPAR
online_memory   Amount of memory currently allocated to the parent LPAR
cpu_limit       CPU limit in 100ths of a percent
mem_limit       Memory limit in 100ths of a percent

For a complete list, see the perfstat_wpar_total_t section in the
libperfstat.h header file.
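Because cpu_limit and mem_limit are reported in 100ths of a percent, a value of
10000 corresponds to 100%. The small sketch below (illustrative only, run from
inside a WPAR so that name is NULL) converts the calling WPAR's limits into
percentages; a cpu_limit of 1700 means the WPAR is capped at 17%:

#include <stdio.h>
#include <libperfstat.h>

int main(void)
{
    perfstat_wpar_total_t winfo;

    if (perfstat_wpar_total(NULL, &winfo, sizeof(perfstat_wpar_total_t), 1) != 1) {
        perror("perfstat_wpar_total");
        return 1;
    }
    /* 10000 == 100%, so 1700 means a 17% cap */
    printf("cpu_limit: %.2f%%   mem_limit: %.2f%%\n",
           (double) winfo.cpu_limit / 100.0,
           (double) winfo.mem_limit / 100.0);
    return 0;
}

The full example that follows uses the same conversion, dividing the limits by
10000.0 to turn them into fractions of the partition's capacity.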
The following code shows an example of how to use the perfstat_wpar_total
function:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <errno.h>
#include <sys/corrals.h>   /* corrallist_t, getcorrallist(), corral_getcid() */
#include <libperfstat.h>

int main(int argc, char *argv[])
{
    int i;
    perfstat_partition_total_t pinfo;
    perfstat_wpar_total_t *winfo;
    perfstat_id_wpar_t wparid;
    int rc, total_wpars = 0;
    corrallist_t *wpar_list = NULL;
    cid_t cid = corral_getcid();

    /* Check whether we are running inside a WPAR */
    if (cid) {
        total_wpars = 1;
    } else {
        /* Get the number of WPARs in the system */
        rc = getcorrallist(NULL, &total_wpars);
        if ((rc == -1) && (errno != ERANGE) && (errno != ENOSPC)) {
            perror("getcorrallist");
            exit(1);
        }
    }

    /* If WPARs exist in the system, get the WPAR details */
    if (total_wpars > 0) {
        /* Obtain the list of WPARs */
        wpar_list = (corrallist_t *) malloc(total_wpars * sizeof(corrallist_t));
        if (wpar_list == NULL) {
            perror("malloc");
            exit(1);
        }
        if (!cid) {
            rc = getcorrallist(wpar_list, &total_wpars);
            if (rc != 0) {
                perror("getcorrallist()");
                exit(1);
            }
        } else {
            wpar_list[0].cid = cid;
            strcpy(wpar_list[0].cname, "Local");
        }

        winfo = (perfstat_wpar_total_t *)
                malloc(total_wpars * sizeof(perfstat_wpar_total_t));
        if (winfo == NULL) {
            perror("malloc");
            exit(1);
        }

        /* Obtain WPAR data */
        bzero(&wparid, sizeof(perfstat_id_wpar_t));
        wparid.spec = WPARID;
        for (i = 0; i < total_wpars; i++) {
            wparid.u.wpar_id = wpar_list[i].cid;
            perfstat_wpar_total((cid) ? NULL : &wparid, &winfo[i],
                                sizeof(perfstat_wpar_total_t), 1);
        }
    }

    /* get LPAR data */
    perfstat_partition_total(NULL, &pinfo, sizeof(perfstat_partition_total_t), 1);

    printf("\nPartition Name                 : %s\n", pinfo.name);
    printf("Partition Number               : %u\n", pinfo.lpar_id);
    printf("Type                           : %s\n",
           pinfo.type.b.shared_enabled ? "Shared" : "Dedicated");
    printf("Mode                           : %s\n",
           pinfo.type.b.capped ? "Capped" : "Uncapped");
    printf("Entitled Capacity              : %u\n", pinfo.entitled_proc_capacity);
    printf("Partition Group-ID             : %u\n", pinfo.group_id);
    printf("Shared Pool ID                 : %u\n", pinfo.pool_id);
    printf("Online Virtual CPUs            : %u\n", pinfo.online_cpus);
    printf("Maximum Virtual CPUs           : %u\n", pinfo.max_cpus);
    printf("Minimum Virtual CPUs           : %u\n", pinfo.min_cpus);
    printf("Online Memory                  : %llu MB\n", pinfo.online_memory);
    printf("Maximum Memory                 : %llu MB\n", pinfo.max_memory);
    printf("Minimum Memory                 : %llu MB\n", pinfo.min_memory);
    printf("Variable Capacity Weight       : %u\n", pinfo.var_proc_capacity_weight);
    printf("Minimum Capacity               : %u\n", pinfo.min_proc_capacity);
    printf("Maximum Capacity               : %u\n", pinfo.max_proc_capacity);
    printf("Capacity Increment             : %u\n", pinfo.proc_capacity_increment);
    printf("Maximum Physical CPUs in system: %u\n", pinfo.max_phys_cpus_sys);
    printf("Active Physical CPUs in system : %u\n", pinfo.online_phys_cpus_sys);
    printf("Active CPUs in Pool            : %u\n", pinfo.phys_cpus_pool);
    printf("Unallocated Capacity           : %u\n", pinfo.unalloc_proc_capacity);
    printf("Physical CPU Percentage        : %4.2f%%\n",
           (double) pinfo.entitled_proc_capacity / (double) pinfo.online_cpus);
    printf("Unallocated Weight             : %u\n",
           pinfo.unalloc_var_proc_capacity_weight);

    for (i = 0; i < total_wpars; i++) {
        char res[80];
        res[0] = '\0';
        if (winfo[i].type.b.cpu_limits)
            strcat(res, "CPU:");
        if (winfo[i].type.b.mem_limits)
            strcat(res, "MEM:");
        if (winfo[i].type.b.cpu_rset)
            strcat(res, "RSET:");
        wparid.u.wpar_id = wpar_list[i].cid;

        printf("\nWPAR Name                      : %s\n", winfo[i].name);
        printf("WPAR ID                        : %u\n", winfo[i].wpar_id);
        printf("Type                           : %s\n", winfo[i].type.b.app_wpar ?
"Application" : "System"); printf("Resource Enforcement : %s\n", res); printf("Maximum Capacity : %u\n", (unsigned long)(winfo[i].entitled_proc_capacity * (winfo[i].cpu_limit/10000.0))); printf("Maximum Memory : %llu MB\n", (unsigned long long)(winfo[i].online_memory * (winfo[i].mem_limit/10000.0))); printf("Maximum Physical CPU Percentage: %4.2f%%\n", ((double)winfo[i].entitled_proc_capacity * (winfo[i].cpu_limit/10000.0)) / (double)pinfo.online_cpus); } } The program above produces output similar to the following: =============================================================================== Partition Name : ses12 Partition Number : 6 Type : Dedicated Mode : Capped Entitled Capacity : 100 Partition Group-ID : 32774 Shared Pool ID : 4294967295 Online Virtual CPUs : 1 Maximum Virtual CPUs : 8 Minimum Virtual CPUs : 1 Online Memory : 2048 MB Maximum Memory : 4096 MB Minimum Memory : 256 MB Variable Capacity Weight : 0 Minimum Capacity : 100 Maximum Capacity : 800 Capacity Increment : 100 Maximum Physical CPUs in system: 8 Active Physical CPUs in system : 8 Active CPUs in Pool : 0 Unallocated Capacity : 0 Physical CPU Percentage : 100.00% Unallocated Weight : 0 WPAR Name : wpar1 WPAR ID : 1 Type : System Resource Enforcement : CPU:MEM: Maximum Capacity : 17 Maximum Memory : 348 MB Maximum Physical CPU Percentage: 17.00% ============================================================================= Please refer /usr/samples/libperfstat/simplewparstat.c for calculating CPU and memory utilisation of WPARs.