.. _s3_tut:

========================================
An Introduction to boto's S3 interface
========================================

This tutorial focuses on the boto interface to the Simple Storage Service
from Amazon Web Services. This tutorial assumes that you have already
downloaded and installed boto.

Creating a Connection
-------------------------

The first step in accessing S3 is to create a connection to the service.
There are two ways to do this in boto. The first is:

>>> from boto.s3.connection import S3Connection
>>> conn = S3Connection('<aws access key>', '<aws secret key>')

At this point the variable conn will point to an S3Connection object. In
this example, the AWS access key and AWS secret key are passed in to the
method explicitly.
Alternatively, you can set the environment variables:

* `AWS_ACCESS_KEY_ID` - Your AWS Access Key ID
* `AWS_SECRET_ACCESS_KEY` - Your AWS Secret Access Key

and then call the constructor without any arguments, like this:

>>> conn = S3Connection()
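
For instance, in an interactive session the environment-variable route could
be sketched like this (the values are placeholders; in practice you would
usually export the variables from your shell before starting Python)::

    >>> import os
    >>> os.environ['AWS_ACCESS_KEY_ID'] = '<aws access key>'
    >>> os.environ['AWS_SECRET_ACCESS_KEY'] = '<aws secret key>'
    >>> conn = S3Connection()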

There is also a shortcut function in the boto package, called connect_s3,
that may provide a slightly easier means of creating a connection::

    >>> import boto
    >>> conn = boto.connect_s3()

In either case, conn will point to an S3Connection object which we will use
throughout the remainder of this tutorial.

Creating a Bucket
--------------------

Once you have a connection established with S3, you will probably want to
create a bucket. A bucket is a container used to store key/value pairs in
S3. A bucket can hold an unlimited amount of data so you could potentially
have just one bucket in S3 for all of your information. Or, you could create
separate buckets for different types of data. You can figure all of that out
later; first, let's just create a bucket. That can be accomplished like
this::

    >>> bucket = conn.create_bucket('mybucket')
    Traceback (most recent call last):
      File "<stdin>", line 1, in ?
      File "boto/connection.py", line 285, in create_bucket
        raise S3CreateError(response.status, response.reason)
    boto.exception.S3CreateError: S3Error[409]: Conflict

Whoa. What happened there? Well, the thing you have to know about buckets is
that they are kind of like domain names. It's one flat name space that
everyone who uses S3 shares. So, someone has already created a bucket called
"mybucket" in S3 and that means no one else can grab that bucket name. So,
you have to come up with a name that hasn't been taken yet. For example,
something that uses a unique string as a prefix. Your AWS_ACCESS_KEY (NOT
YOUR SECRET KEY!) could work but I'll leave it to your imagination to come up
with something. I'll just assume that you found an acceptable name.

The create_bucket method will create the requested bucket if it does not
exist or will return the existing bucket if it does exist.

Creating a Bucket In Another Location
----------------------------------------

The example above assumes that you want to create a bucket in the standard
US region. However, it is possible to create buckets in other locations.
To do so, first import the Location object from the boto.s3.connection
module, like this::

    >>> from boto.s3.connection import Location
    >>> print '\n'.join(i for i in dir(Location) if i[0].isupper())
    APNortheast
    APSoutheast
    APSoutheast2
    DEFAULT
    EU
    SAEast
    USWest
    USWest2

As you can see, the Location object defines a number of possible locations.
By default, the location is the empty string which is interpreted as the US
Classic Region, the original S3 region. However, by specifying another
location at the time the bucket is created, you can instruct S3 to create
the bucket in that location. For example::

    >>> conn.create_bucket('mybucket', location=Location.EU)

will create the bucket in the EU region (assuming the name is available).

Storing Data
---------------

Once you have a bucket, presumably you will want to store some data in it.
S3 doesn't care what kind of information you store in your objects or what
format you use to store it. All you need is a key that is unique within
your bucket.

The Key object is used in boto to keep track of data stored in S3.
To store new data in S3, start by creating a new Key object::

    >>> from boto.s3.key import Key
    >>> k = Key(bucket)
    >>> k.key = 'foobar'
    >>> k.set_contents_from_string('This is a test of S3')

The net effect of these statements is to create a new object in S3 with a
key of "foobar" and a value of "This is a test of S3". To validate that this
worked, quit out of the interpreter and start it up again. Then::

    >>> import boto
    >>> c = boto.connect_s3()
    >>> b = c.get_bucket('mybucket') # substitute your bucket name here
    >>> from boto.s3.key import Key
    >>> k = Key(b)
    >>> k.key = 'foobar'
    >>> k.get_contents_as_string()
    'This is a test of S3'

So, we can definitely store and retrieve strings. A more interesting example
may be to store the contents of a local file in S3 and then retrieve the
contents to another local file.

::

    >>> k = Key(b)
    >>> k.key = 'myfile'
    >>> k.set_contents_from_filename('foo.jpg')
    >>> k.get_contents_to_filename('bar.jpg')

There are a couple of things to note about this. When you send data to S3
from a file or filename, boto will attempt to determine the correct mime
type for that file and send it as a Content-Type header. The boto package
uses the standard mimetypes package in Python to do the mime type guessing.
The other thing to note is that boto does stream the content to and from S3
so you should be able to send and receive large files without any problem.
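
Because boto reads from and writes to file-like objects, the same round trip
can also be done with open file handles. A minimal sketch (the key name and
the explicit Content-Type header are illustrative only; boto will guess the
type for you if the header is omitted)::

    >>> k = Key(b)
    >>> k.key = 'myfile-streamed'
    >>> fp = open('foo.jpg', 'rb')
    >>> k.set_contents_from_file(fp, headers={'Content-Type': 'image/jpeg'})
    >>> fp.close()
    >>> out = open('bar.jpg', 'wb')
    >>> k.get_contents_to_file(out)
    >>> out.close()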

When fetching a key that already exists, you have two options. If you're
uncertain whether a key exists (or if you need the metadata set on it), you
can call ``Bucket.get_key(key_name_here)``. However, if you're sure a key
already exists within a bucket, you can skip the check for a key on the
server.

::

    >>> import boto
    >>> c = boto.connect_s3()
    >>> b = c.get_bucket('mybucket') # substitute your bucket name here

    # Will hit the API to check if it exists.
    >>> possible_key = b.get_key('mykey') # substitute your key name here

    # Won't hit the API.
    >>> key_we_know_is_there = b.get_key('mykey', validate=False)

Storing Large Data
---------------------

At times the data you may want to store will be hundreds of megabytes or
more in size. S3 allows you to split such files into smaller components.
You upload each component in turn and then S3 combines them into the final
object. While this is fairly straightforward, it requires a few extra steps
to be taken.
The example below makes use of the FileChunkIO module, so
``pip install FileChunkIO`` if it isn't already installed.

::

    >>> import math, os
    >>> import boto
    >>> from filechunkio import FileChunkIO

    # Connect to S3
    >>> c = boto.connect_s3()
    >>> b = c.get_bucket('mybucket')

    # Get file info
    >>> source_path = 'path/to/your/file.ext'
    >>> source_size = os.stat(source_path).st_size

    # Create a multipart upload request
    >>> mp = b.initiate_multipart_upload(os.path.basename(source_path))

    # Use a chunk size of 50 MiB (feel free to change this)
    >>> chunk_size = 52428800
    >>> chunk_count = int(math.ceil(source_size / float(chunk_size)))

    # Send the file parts, using FileChunkIO to create a file-like object
    # that points to a certain byte range within the original file. We
    # set bytes to never exceed the original file size.
    >>> for i in range(chunk_count):
    ...     offset = chunk_size * i
    ...     bytes = min(chunk_size, source_size - offset)
    ...     with FileChunkIO(source_path, 'r', offset=offset, bytes=bytes) as fp:
    ...         mp.upload_part_from_file(fp, part_num=i + 1)

    # Finish the upload
    >>> mp.complete_upload()

It is also possible to upload the parts in parallel using threads. The
``s3put`` script that ships with Boto provides an example of doing so using
a thread pool.

Note that if you forget to call either ``mp.complete_upload()`` or
``mp.cancel_upload()`` you will be left with an incomplete upload and
charged for the storage consumed by the uploaded parts. A call to
``bucket.get_all_multipart_uploads()`` can help to show lost multipart
upload parts.
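
As a rough sketch of that kind of cleanup (whether to cancel is, of course,
your call; ``cancel_upload`` discards the parts that were already uploaded)::

    >>> for mp in b.get_all_multipart_uploads():
    ...     print mp.key_name, mp.id
    ...     mp.cancel_upload()  # only do this for uploads you know are abandoned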

Accessing A Bucket
---------------------

Once a bucket exists, you can access it by getting the bucket. For example::

    >>> mybucket = conn.get_bucket('mybucket') # Substitute in your bucket name
    >>> mybucket.list()
    ...listing of keys in the bucket...

By default, this method tries to validate the bucket's existence. You can
override this behavior by passing ``validate=False``::

    >>> nonexistent = conn.get_bucket('i-dont-exist-at-all', validate=False)

.. versionchanged:: 2.25.0

.. warning::

    If ``validate=False`` is passed, no request is made to the service (no
    charge/communication delay). This is only safe to do if you are **sure**
    the bucket exists.

    If the default ``validate=True`` is passed, a request is made to the
    service to ensure the bucket exists. Prior to Boto v2.25.0, this fetched
    a list of keys (but with a max limit set to ``0``, always returning an
    empty list) in the bucket (& included better error messages), at an
    increased expense. As of Boto v2.25.0, this now performs a HEAD request
    (less expensive but worse error messages).

    If you were relying on parsing the error message before, you should call
    something like::

        bucket = conn.get_bucket('<bucket_name>', validate=False)
        bucket.get_all_keys(maxkeys=0)

If the bucket does not exist, an ``S3ResponseError`` will commonly be thrown.
If you'd rather not deal with any exceptions, you can use the ``lookup``
method::

    >>> nonexistent = conn.lookup('i-dont-exist-at-all')
    >>> if nonexistent is None:
    ...     print "No such bucket!"
    ...
    No such bucket!

Deleting A Bucket
--------------------

Removing a bucket can be done using the ``delete_bucket`` method. For
example::

    >>> conn.delete_bucket('mybucket') # Substitute in your bucket name

The bucket must be empty of keys or this call will fail & an exception will
be raised. You can remove a non-empty bucket by doing something like::

    >>> full_bucket = conn.get_bucket('bucket-to-delete')
    # It's full of keys. Delete them all.
    >>> for key in full_bucket.list():
    ...     key.delete()
    ...
    # The bucket is empty now. Delete it.
    >>> conn.delete_bucket('bucket-to-delete')

.. warning::

    This method can cause data loss! Be very careful when using it.

    Additionally, be aware that using the above method for removing all keys
    and deleting the bucket involves a request for each key. As such, it's
    not particularly fast & is very chatty.
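
If your version of boto supports S3 multi-object delete, ``Bucket.delete_keys``
can reduce the number of requests by deleting keys in batches; a sketch,
assuming that method is available::

    >>> full_bucket = conn.get_bucket('bucket-to-delete')
    >>> full_bucket.delete_keys([key.name for key in full_bucket.list()])
    >>> conn.delete_bucket('bucket-to-delete')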

Listing All Available Buckets
--------------------------------

In addition to accessing specific buckets via the create_bucket method, you
can also get a list of all available buckets that you have created.

::

    >>> rs = conn.get_all_buckets()

This returns a ResultSet object (see the SQS Tutorial for more info on
ResultSet objects). The ResultSet can be used as a sequence or list type
object to retrieve Bucket objects.

::

    >>> len(rs)
    11
    >>> for b in rs:
    ...     print b.name
    ...
    <listing of available buckets>
    >>> b = rs[0]

Setting / Getting the Access Control List for Buckets and Keys
-----------------------------------------------------------------

The S3 service provides the ability to control access to buckets and keys
within s3 via the Access Control List (ACL) associated with each object in
S3. There are two ways to set the ACL for an object:

1. Create a custom ACL that grants specific rights to specific users. At the
   moment, the users that are specified within grants have to be registered
   users of Amazon Web Services so this isn't as useful or as general as it
   could be.

2. Use a "canned" access control policy. There are four canned policies
   defined:

   a. private: Owner gets FULL_CONTROL. No one else has any access rights.
   b. public-read: Owner gets FULL_CONTROL and the anonymous principal is
      granted READ access.
   c. public-read-write: Owner gets FULL_CONTROL and the anonymous principal
      is granted READ and WRITE access.
   d. authenticated-read: Owner gets FULL_CONTROL and any principal
      authenticated as a registered Amazon S3 user is granted READ access.

To set a canned ACL for a bucket, use the set_acl method of the Bucket
object. The argument passed to this method must be one of the four
permissible canned policies named in the list CannedACLStrings contained in
acl.py.
For example, to make a bucket readable by anyone:

>>> b.set_acl('public-read')

You can also set the ACL for Key objects, either by passing an additional
argument to the above method:

>>> b.set_acl('public-read', 'foobar')

where 'foobar' is the key of some object within the bucket b or you can call
the set_acl method of the Key object:

>>> k.set_acl('public-read')

You can also retrieve the current ACL for a Bucket or Key object using the
get_acl method. This method parses the AccessControlPolicy response sent by
S3 and creates a set of Python objects that represent the ACL.

::

    >>> acp = b.get_acl()
    >>> acp
    <boto.s3.acl.Policy instance at 0x...>
    >>> acp.acl
    <boto.s3.acl.ACL instance at 0x...>
    >>> acp.acl.grants
    [<boto.s3.acl.Grant instance at 0x...>]
    >>> for grant in acp.acl.grants:
    ...     print grant.permission, grant.display_name, grant.email_address, grant.id
    ...
    FULL_CONTROL <owner display name> None <owner canonical user id>

The Python objects representing the ACL can be found in the acl.py module of
boto.

Both the Bucket object and the Key object also provide shortcut methods to
simplify the process of granting individuals specific access. For example,
if you want to grant an individual user READ access to a particular object
in S3 you could do the following::

    >>> key = b.lookup('mykeytoshare')
    >>> key.add_email_grant('READ', 'foo@bar.com')

The email address provided should be the one associated with the user's AWS
account. There is a similar method called add_user_grant that accepts the
canonical id of the user rather than the email address.
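
A sketch of the ``add_user_grant`` variant (the canonical user id shown is a
placeholder)::

    >>> key = b.lookup('mykeytoshare')
    >>> key.add_user_grant('READ', '<canonical-user-id>')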

Setting/Getting Metadata Values on Key Objects
--------------------------------------------------

S3 allows arbitrary user metadata to be assigned to objects within a bucket.
To take advantage of this S3 feature, you should use the set_metadata and
get_metadata methods of the Key object to set and retrieve metadata
associated with an S3 object. For example::

    >>> k = Key(b)
    >>> k.key = 'has_metadata'
    >>> k.set_metadata('meta1', 'This is the first metadata value')
    >>> k.set_metadata('meta2', 'This is the second metadata value')
    >>> k.set_contents_from_filename('foo.txt')

This code associates two metadata key/value pairs with the Key k. To
retrieve those values later::

    >>> k = b.get_key('has_metadata')
    >>> k.get_metadata('meta1')
    'This is the first metadata value'
    >>> k.get_metadata('meta2')
    'This is the second metadata value'
    >>>

Setting/Getting/Deleting CORS Configuration on a Bucket
------------------------------------------------------------

Cross-origin resource sharing (CORS) defines a way for client web
applications that are loaded in one domain to interact with resources in a
different domain.
With CORS support in Amazon S3, you can build rich client-side web
applications with Amazon S3 and selectively allow cross-origin access to
your Amazon S3 resources.

To create a CORS configuration and associate it with a bucket::

    >>> from boto.s3.cors import CORSConfiguration
    >>> cors_cfg = CORSConfiguration()
    >>> cors_cfg.add_rule(['PUT', 'POST', 'DELETE'], 'https://www.example.com',
    ...                   allowed_header='*', max_age_seconds=3000,
    ...                   expose_header='x-amz-server-side-encryption')
    >>> cors_cfg.add_rule('GET', '*')

The above code creates a CORS configuration object with two rules.

* The first rule allows cross-origin PUT, POST, and DELETE requests from the
  https://www.example.com/ origin. The rule also allows all headers in a
  preflight OPTIONS request through the Access-Control-Request-Headers
  header. In response to any preflight OPTIONS request, Amazon S3 will
  return any requested headers.
* The second rule allows cross-origin GET requests from all origins.

To associate this configuration with a bucket::

    >>> import boto
    >>> c = boto.connect_s3()
    >>> bucket = c.lookup('mybucket')
    >>> bucket.set_cors(cors_cfg)

To retrieve the CORS configuration associated with a bucket::

    >>> cors_cfg = bucket.get_cors()

And, finally, to delete all CORS configurations from a bucket::

    >>> bucket.delete_cors()

Transitioning Objects to Glacier
-----------------------------------

You can configure objects in S3 to transition to Glacier after a period of
time. This is done using lifecycle policies. A lifecycle policy can also
specify that an object should be deleted after a period of time (a sketch of
such a rule follows the list below). Lifecycle configurations are assigned
to buckets and require these parameters:

* The object prefix that identifies the objects you are targeting.
* The action you want S3 to perform on the identified objects.
* The date (or time period) when you want S3 to perform these actions.
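
As a sketch of the deletion case mentioned above (this assumes a boto
version that provides the ``Expiration`` class in ``boto.s3.lifecycle``), a
rule that expires objects under a prefix after a year could be built like
this and then attached with ``configure_lifecycle`` exactly as in the
transition walk-through below::

    >>> from boto.s3.lifecycle import Lifecycle, Expiration, Rule
    >>> expire_rule = Rule('expirerule', 'logs/', 'Enabled',
    ...                    expiration=Expiration(days=365))
    >>> lifecycle = Lifecycle()
    >>> lifecycle.append(expire_rule)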

For example, given a bucket ``s3-glacier-boto-demo``, we can first retrieve
the bucket::

    >>> import boto
    >>> c = boto.connect_s3()
    >>> bucket = c.get_bucket('s3-glacier-boto-demo')

Then we can create a lifecycle object. In our example, we want all objects
under ``logs/*`` to transition to Glacier 30 days after the object is
created.

::

    >>> from boto.s3.lifecycle import Lifecycle, Transition, Rule
    >>> to_glacier = Transition(days=30, storage_class='GLACIER')
    >>> rule = Rule('ruleid', 'logs/', 'Enabled', transition=to_glacier)
    >>> lifecycle = Lifecycle()
    >>> lifecycle.append(rule)

.. note::

    For API docs for the lifecycle objects, see :py:mod:`boto.s3.lifecycle`

We can now configure the bucket with this lifecycle policy::

    >>> bucket.configure_lifecycle(lifecycle)
    True

You can also retrieve the current lifecycle policy for the bucket::

    >>> current = bucket.get_lifecycle_config()
    >>> print current[0].transition

When an object transitions to Glacier, the storage class will be updated.
This can be seen when you **list** the objects in a bucket::

    >>> for key in bucket.list():
    ...     print key, key.storage_class
    ...
    <Key: s3-glacier-boto-demo,logs/testlog1.log> GLACIER

You can also use the prefix argument to the ``bucket.list`` method::

    >>> print list(b.list(prefix='logs/testlog1.log'))[0].storage_class
    u'GLACIER'

Restoring Objects from Glacier
---------------------------------

Once an object has been transitioned to Glacier, you can restore the object
back to S3. To do so, you can use the :py:meth:`boto.s3.key.Key.restore`
method of the key object. The ``restore`` method takes an integer that
specifies the number of days to keep the object in S3.

::

    >>> import boto
    >>> c = boto.connect_s3()
    >>> bucket = c.get_bucket('s3-glacier-boto-demo')
    >>> key = bucket.get_key('logs/testlog1.log')
    >>> key.restore(days=5)

It takes about 4 hours for a restore operation to make a copy of the archive
available for you to access.
It takes about 4 hours for a restore operation to make a copy of the archive
available for you to access. While the object is being restored, the
``ongoing_restore`` attribute will be set to ``True``::

    >>> key = bucket.get_key('logs/testlog1.log')
    >>> print key.ongoing_restore
    True

When the restore is finished, this value will be ``False`` and the expiry
date of the object will no longer be ``None``::

    >>> key = bucket.get_key('logs/testlog1.log')
    >>> print key.ongoing_restore
    False
    >>> print key.expiry_date
    "Fri, 21 Dec 2012 00:00:00 GMT"

.. note::

    If there is no restore operation either in progress or completed, the
    ``ongoing_restore`` attribute will be ``None``.

Once the object is restored, you can then download the contents::

    >>> key.get_contents_to_filename('testlog1.log')
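
Because the restore takes several hours to complete, a common pattern is to
poll ``ongoing_restore`` and download once it becomes ``False``. This is a
sketch using only the attributes shown above; the fifteen-minute sleep
interval is an arbitrary choice::

    >>> import time
    >>> key = bucket.get_key('logs/testlog1.log')
    >>> while key.ongoing_restore:
    ...   time.sleep(15 * 60)                          # wait before checking again
    ...   key = bucket.get_key('logs/testlog1.log')    # refresh the restore status
    ...
    >>> key.get_contents_to_filename('testlog1.log')
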