==========
CloudWatch
==========

First, make sure you have something to monitor. You can either create
a LoadBalancer or enable monitoring on an existing EC2 instance. To
enable monitoring, you can either call the monitor_instance method on
the EC2Connection object or call the monitor method on the Instance
object.
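For example, here is a minimal sketch of both approaches. The region
and instance id are the illustrative values used later in this
tutorial, and get_all_instances is the standard boto lookup call::

    >>> import boto.ec2
    >>> ec2 = boto.ec2.connect_to_region('us-west-2')
    >>> # Option 1: enable monitoring via the connection, by instance id.
    >>> ec2.monitor_instance('i-4ca81747')
    >>> # Option 2: look up the Instance object and call monitor on it.
    >>> instance = ec2.get_all_instances(instance_ids=['i-4ca81747'])[0].instances[0]
    >>> instance.monitor()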
It takes a while for the monitoring data to start accumulating but once
it does, you can do this::

    >>> import boto.ec2.cloudwatch
    >>> c = boto.ec2.cloudwatch.connect_to_region('us-west-2')
    >>> metrics = c.list_metrics()
    >>> metrics
    [Metric:DiskReadBytes, Metric:CPUUtilization, Metric:DiskWriteOps,
     Metric:DiskWriteOps, Metric:DiskReadOps, Metric:DiskReadBytes,
     Metric:DiskReadOps, Metric:CPUUtilization, Metric:DiskWriteOps,
     Metric:NetworkIn, Metric:NetworkOut, Metric:NetworkIn,
     Metric:DiskReadBytes, Metric:DiskWriteBytes, Metric:DiskWriteBytes,
     Metric:NetworkIn, Metric:NetworkIn, Metric:NetworkOut,
     Metric:NetworkOut, Metric:DiskReadOps, Metric:CPUUtilization,
     Metric:DiskReadOps, Metric:CPUUtilization, Metric:DiskWriteBytes,
     Metric:DiskWriteBytes, Metric:DiskReadBytes, Metric:NetworkOut,
     Metric:DiskWriteOps]

The list_metrics call returns a list of all of the available metrics
that you can query against. Each entry in the list is a Metric object.
As you can see from the list above, some of the metrics are repeated.
The repeated metrics are across different dimensions (per-instance,
per-image type, per-instance type), which can be identified by looking
at the dimensions property.

Because this example monitors only a single instance, the set of
available metrics is fairly limited. If I were monitoring many
instances, using many different instance types and AMIs, and also
several load balancers, the list of available metrics would grow
considerably.
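Even then, you rarely need to scan the whole list by hand, because
list_metrics accepts server-side filters on the metric name and
dimensions. A minimal sketch, reusing the instance id from this
example (the single-element result assumes only this instance is
publishing CPU data)::

    >>> c.list_metrics(metric_name='CPUUtilization',
    ...                dimensions={'InstanceId': 'i-4ca81747'})
    [Metric:CPUUtilization]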
Once you have the list of available metrics, you can actually query
the CloudWatch system for that metric. Let's choose the CPU
utilization metric for one of the ImageIds::

    >>> m_image = metrics[7]
    >>> m_image
    Metric:CPUUtilization
    >>> m_image.dimensions
    {u'ImageId': [u'ami-6ac2a85a']}

Now let's choose another CPU utilization metric, this time for our
instance::

    >>> m = metrics[20]
    >>> m
    Metric:CPUUtilization
    >>> m.dimensions
    {u'InstanceId': [u'i-4ca81747']}

The Metric object has a query method that lets us actually perform
the query against the collected data in CloudWatch. To call that, we
need a start time and an end time to control the time span of data
that we are interested in. For this example, let's say we want the
data for the previous hour::

    >>> import datetime
    >>> end = datetime.datetime.utcnow()
    >>> start = end - datetime.timedelta(hours=1)

We also need to supply the Statistic that we want reported and the
Units to use for the results. The Statistic can be one of these
values::

    ['Minimum', 'Maximum', 'Sum', 'Average', 'SampleCount']

And Units must be one of the following::

    ['Seconds', 'Microseconds', 'Milliseconds', 'Bytes', 'Kilobytes',
     'Megabytes', 'Gigabytes', 'Terabytes', 'Bits', 'Kilobits',
     'Megabits', 'Gigabits', 'Terabits', 'Percent', 'Count',
     'Bytes/Second', 'Kilobytes/Second', 'Megabytes/Second',
     'Gigabytes/Second', 'Terabytes/Second', 'Bits/Second',
     'Kilobits/Second', 'Megabits/Second', 'Gigabits/Second',
     'Terabits/Second', 'Count/Second', None]

The query method also takes an optional parameter, period. This
parameter controls the granularity (in seconds) of the data returned.
The smallest period is 60 seconds and the value must be a multiple of
60 seconds. So, let's ask for the average as a percent::

    >>> datapoints = m.query(start, end, 'Average', 'Percent')
    >>> len(datapoints)
    60
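A coarser period returns proportionally fewer points. For example, a
five-minute period over the same hour should yield twelve data points,
since 3600 / 300 = 12 (a sketch using the same metric)::

    >>> len(m.query(start, end, 'Average', 'Percent', period=300))
    12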
In the first query, our period was 60 seconds and our duration was one
hour, so we should get 60 data points back, and we can see that we
did. Each element in the datapoints list is a DataPoint object, which
is a simple subclass of a Python dict object. Each DataPoint object
contains all of the information available about that particular data
point::

    >>> d = datapoints[0]
    >>> d
    {u'Timestamp': datetime.datetime(2014, 6, 23, 22, 25),
     u'Average': 20.0,
     u'Unit': u'Percent'}

My server obviously isn't very busy right now!
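Because each DataPoint is just a dict, no special API is needed to
work with the series. As a final sketch, this prints the hour in time
order (the output line shown is the data point from above)::

    >>> for d in sorted(datapoints, key=lambda p: p[u'Timestamp']):
    ...     print d[u'Timestamp'], d[u'Average'], d[u'Unit']
    2014-06-23 22:25:00 20.0 Percent
    ...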