Amazon S3 Technical FAQs

A collection of the issues that come up most often. This is a must-read for new Amazon S3 users and a great refresher for developers already using the service.

Details

Submitted By: Justin@AWS
AWS Products Used: Amazon S3
Created On: December 21, 2007 10:12 PM GMT
Last Updated: March 25, 2008 8:10 PM GMT

Notes for Newbies

  1. How do I start using Amazon S3?
  2. Why can't I create a bucket named "images"?
  3. I only see 1000 objects in my bucket, what gives?
  4. Why do some of my requests randomly fail with a 403 Forbidden?
  5. How do I rename, copy, or move a bucket?
  6. Can I give anyone access to my bucket?
  7. What characters are allowed in a bucket or object name?
  8. How do I mimic a typical file system on Amazon S3?
  9. I need more than 100 buckets!
  10. How do I PUT an object larger than 1 MB via SOAP?

Other Questions

  1. Why do trace routes and pings to s3.amazonaws.com fail?
  2. Am I charged for unsuccessful requests to Amazon S3?
  3. How can I cap my Amazon S3 usage?
  4. Is Transfer-Encoding: chunked supported by Amazon S3?
  5. My request failed with 403 Forbidden / 400 Bad Request / 409 Conflict! What's wrong?

Reporting an Issue

  1. My requests are failing, what information do you need to help me?


How do I start using Amazon S3?

If you are a developer:

  1. Technical Documentation
  2. Code Samples
  3. Developer Forums (after you finish reading this page :)

If you have no development experience and just want a storage solution:


Why can't I create a bucket named "images"?

The bucket namespace is global - just like domain names. You can't create a bucket with a name that is already in use by another Amazon S3 user. In most cases, common names such as "images" or "users" will not be available.

If you try to create a bucket with a name that already exists, you will get a 409 Conflict.


I only see 1000 objects in my bucket, what gives?

When you list the contents of a bucket, the results come back in pages (currently 1000 keys or fewer). If the IsTruncated node in the response from Amazon S3 is true, you'll need to make another request to get the rest of your keys. Your code must be prepared to handle this scenario.

If the IsTruncated flag is set, request the next page of results by setting Marker to the value of the NextMarker node from the last Amazon S3 response.
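That loop can be sketched as follows. This is a minimal illustration, not a complete client: `fetch_page` stands in for an actual signed ListBucket GET request (with `?marker=...` appended) and is assumed to return the raw XML body of one response.

```python
import xml.etree.ElementTree as ET

# ListBucket responses use this XML namespace.
NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

def parse_list_response(xml_body):
    """Extract the keys, the IsTruncated flag, and NextMarker (if any)
    from one ListBucket response body."""
    root = ET.fromstring(xml_body)
    keys = [c.find(NS + "Key").text for c in root.findall(NS + "Contents")]
    truncated = (root.findtext(NS + "IsTruncated") == "true")
    next_marker = root.findtext(NS + "NextMarker")
    return keys, truncated, next_marker

def list_all_keys(fetch_page):
    """Keep requesting pages until IsTruncated comes back false.

    fetch_page(marker) is a stand-in for issuing a real, signed GET on
    the bucket with the given marker; here it just returns an XML body.
    """
    all_keys, marker = [], None
    while True:
        keys, truncated, next_marker = parse_list_response(fetch_page(marker))
        all_keys.extend(keys)
        if not truncated:
            return all_keys
        # NextMarker is only present in some responses; otherwise the
        # last key returned serves as the marker for the next page.
        marker = next_marker or keys[-1]
```

Forgetting the marker update is the classic bug here: the loop then fetches the first page forever.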


Why do some of my requests randomly fail with a 403 Forbidden?

Check the system clock and time zone settings on the offending machine. Amazon S3 requires that the clock on any machine making requests be within 15 minutes of the Amazon S3 webserver's clock. Syncing your machines' clocks with an NTP server, and making sure they are patched for the recent Daylight Saving Time changes, should resolve this issue.

This is a common error when a developer deploys an application to another machine.

The response from Amazon S3 will contain the following:

  • HTTP Status Code: 403 Forbidden
  • Error Code: RequestTimeTooSkewed
  • Description: The difference between the request time and the server's time is too large.
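One quick way to diagnose this is to compare your local clock against the Date header that comes back on any S3 response. The sketch below assumes you have already captured such a header value; the example date is made up for illustration.

```python
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

# S3 rejects requests whose timestamp differs from its clock by more
# than 15 minutes.
MAX_SKEW = timedelta(minutes=15)

def clock_skew(server_date_header, local_now=None):
    """Return the absolute difference between the local clock and the
    Date header (RFC 1123 format) from an S3 response."""
    server_time = parsedate_to_datetime(server_date_header)
    if local_now is None:
        local_now = datetime.now(timezone.utc)
    return abs(local_now - server_time)

# A machine whose clock is 20 minutes fast would be rejected:
local = datetime(2008, 3, 25, 20, 30, 0, tzinfo=timezone.utc)
header = "Tue, 25 Mar 2008 20:10:00 GMT"
assert clock_skew(header, local) > MAX_SKEW
```

If the computed skew is over 15 minutes, fix the clock before debugging anything else about the request.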

How do I rename, copy, or move a bucket?

There is currently no way to perform any of these operations on an existing bucket. You will need to download the data from the existing bucket and re-upload it into a newly created bucket.

This is the same approach you will need to take to move a bucket to a different location.
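The download-and-re-upload loop reduces to something like the sketch below. The three callables are stand-ins (not real S3 API calls): `list_keys()` yields the source keys, `get_object(key)` downloads the bytes, and `put_object(key, data)` uploads them to the new bucket; the dicts here simply simulate the two buckets.

```python
def migrate_bucket(list_keys, get_object, put_object):
    """Copy every object out of one bucket and into another, one key
    at a time. Remember the listing is paginated in real code (see the
    1000-key question above)."""
    for key in list_keys():
        put_object(key, get_object(key))

# Dict-backed stand-ins for the old and new buckets:
old_bucket = {"a.txt": b"alpha", "b.txt": b"beta"}
new_bucket = {}
migrate_bucket(old_bucket.keys, old_bucket.__getitem__, new_bucket.__setitem__)
```

With real buckets the same structure applies; only the three callables change to signed GET, GET, and PUT requests.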


Can I give anyone access to my bucket?

Yes, you can make your bucket publicly readable or even writable. Keep in mind, however, that you are responsible for the data that is both downloaded from and uploaded to your buckets. Take care when granting 'write' access, as that user will be able to delete any object in the bucket.

You'll find more information about this topic in our technical documentation under the "Authentication and Access Control" section.


What characters are allowed in a bucket or object name?

A key is a sequence of Unicode characters whose UTF-8 encoding is at most 1024 bytes long.
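Note that the 1024-byte limit applies to the UTF-8 encoding, not the character count, so multi-byte characters use up the budget faster. A simple client-side check:

```python
# Maximum length of a key's UTF-8 encoding, per the S3 documentation.
MAX_KEY_BYTES = 1024

def is_valid_key(key):
    """A key is valid if its UTF-8 encoding is at most 1024 bytes.
    Characters outside ASCII encode to 2-4 bytes each."""
    return len(key.encode("utf-8")) <= MAX_KEY_BYTES

assert is_valid_key("a" * 1024)
assert not is_valid_key("a" * 1025)
# 512 three-byte characters already encode to 1536 bytes:
assert not is_valid_key("\u20ac" * 512)
```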

Please make sure to review the "Bucket Restrictions and Limitations" section of the technical documentation.


How do I mimic a typical file system on Amazon S3?

You can mimic a file system hierarchy by using the Prefix and Delimiter parameters when you list a bucket. When you store your objects, create key names that correspond to a typical file system path.

For example, the bucket "my-application" could include the following keys:

  • john/settings/conf.txt
  • jane/settings/conf.txt
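To see what Prefix and Delimiter buy you, here is a local emulation of the grouping S3 performs server-side when you list a bucket (a sketch for intuition, not the actual ListBucket implementation): keys "directly under" the prefix come back as contents, and everything deeper is rolled up into common prefixes, like subdirectories.

```python
def list_dir(keys, prefix="", delimiter="/"):
    """Emulate S3's Prefix/Delimiter grouping on a local list of keys:
    return the 'files' directly under prefix, plus the 'subdirectories'
    (common prefixes) rolled up at the first delimiter."""
    contents, common_prefixes = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything up to the first delimiter collapses into one entry.
            common_prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            contents.append(key)
    return contents, sorted(common_prefixes)

keys = ["john/settings/conf.txt", "jane/settings/conf.txt"]
# Listing the "root" shows the two top-level "directories":
assert list_dir(keys) == ([], ["jane/", "john/"])
# Listing inside john/settings/ shows the "file":
assert list_dir(keys, prefix="john/settings/") == (["john/settings/conf.txt"], [])
```

In a real request, you pass prefix and delimiter as query parameters and S3 does this grouping for you.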

While parameters do exist that will allow you to mimic a typical file system hierarchy, you should not think of Amazon S3 like you would a typical file system. Amazon S3 is a distributed system that will exhibit some behavior you may not be used to seeing.

For example, it may take a few seconds for an object update to propagate to all parts of the system.


I need more than 100 buckets!

The bucket limit is something new users need to take into account when architecting their applications. In most cases, users find they only require a small number of buckets once they have looked at how to mimic a standard file system (see above).


How do I PUT an object larger than 1 MB via SOAP?

You have to use a DIME attachment when putting large objects via SOAP.

Microsoft .NET Framework users who want to use SOAP will need to install the WSE 2.0 package. Note that WSE 3.0 (MTOM) is not yet supported by Amazon S3.


Why do trace routes and pings to s3.amazonaws.com fail?

The Amazon network blocks the packet types (ICMP) used by trace route and ping utilities, so "destination unreachable" is expected behavior.

Providing the results of a trace route is still useful when troubleshooting network-related issues.

Generally speaking, the route between any end-user endpoint and the Amazon network traverses many routers, most of which are controlled by neither Amazon nor the end user. The results of a trace help determine whether a user's issue is related to one of these many routers.


Am I charged for unsuccessful requests to Amazon S3?

Because our intent is to charge equitably for the system resources used, the bucket owner is charged for 403 and 404 responses, since these consume system resources just as all requests do. Note that we do not charge for requests that fail due to an Amazon S3 internal system error; all other requests are billed.

If you suspect fraudulent or malicious access to your buckets, please contact us immediately and we'll work with you to resolve the issue.

You should not be charged when a request terminates with a 5xx error.


How can I cap my Amazon S3 usage?

We do not currently provide the ability to cap your usage of Amazon S3.

If you would like to track your usage via bucket and object logging, please review the "Server Access Logging" feature in the Amazon S3 Developer Guide.

http://docs.amazonwebservices.com/AmazonS3/2006-03-01/ServerLogs.html

This feature provides a best-effort attempt to log all access to objects within a bucket, so the actual usage report at the end of a month may vary slightly.

Your other option is to develop your own service that sits between users and Amazon S3 and monitors all requests to your buckets and objects.


Is Transfer-Encoding: chunked supported by Amazon S3?

Transfer-Encoding: chunked is not supported. PUT requests must include a Content-Length header.
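In practice this means you must know the payload's byte count before the request goes out: read the data fully into memory, or stat the file, rather than streaming a body of unknown length. A minimal illustration of building the required headers (the function name and header set are this sketch's own, not an S3 API):

```python
def build_put_headers(body, content_type="application/octet-stream"):
    """S3 rejects chunked uploads, so the byte count must be known up
    front and sent as Content-Length."""
    if not isinstance(body, (bytes, bytearray)):
        raise TypeError("read the payload fully into memory (or stat "
                        "the file) first; a stream of unknown length "
                        "cannot be PUT to S3")
    return {
        "Content-Length": str(len(body)),
        "Content-Type": content_type,
    }

headers = build_put_headers(b"hello world")
assert headers["Content-Length"] == "11"
```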


My request failed with 403 Forbidden / 400 Bad Request / 409 Conflict! What's wrong?

When debugging Amazon S3 error responses, be sure to look at the XML error document located in the body of the HTTP reply message. It contains extra information about the error not included in the HTTP headers. Some HTTP user agents don't show the HTTP message body by default, but this extra information is essential for understanding the problem with your request.


My requests are failing, what information do you need to help me?

In order to effectively troubleshoot issues related to Amazon S3, we typically need the following information:

  • Request details: method, Request-URI, all HTTP headers, and the request body (if making a SOAP request)
  • Response details: status code, all HTTP headers, and the response body (which includes the error details)

A user can obtain all of this information by logging the bits of a request on the wire before they leave their system, and by logging the bits from the Amazon server response on the wire before they reach their application.

Here are some example utilities available on the Internet for download that can be used to capture this data:

  1. tcpmon
  2. wireshark
  3. ethereal
  4. tcpdump

Comments

Retrive the Bucket objects
Hi, I have created two buckets and each buckets has few objects into it. When i am trying to retrieve the objects from the buckets it shows that the no objects are there in buckets. If any one can help me out with the simple steps to retrieve the objects and display the same through application.
transparentvalue on September 8, 2010 3:08 PM GMT
S3 account registration in Jungledisk
Hello, I signed up with Jungle Disk Desktop Edition to use Amazon S3 account. My online disk is up and running fine, but I wish to use Webdav for my Zotero research backups. In Jungledisk online account management, under the "amazon S3 accounts" and after entering my license keys, it keeps giving me the error message "You are not fully signed up for Amazon S3." When I try to add an account in the desktop client it gives me the error message "invalid credentials". I have tripple checked the copying and pasting of the keys from aws.amazon.com and still not letting me register. I went into Amazon aws and made sure that I enrolled in an S3 account, but still no luck setting up my Jungle Disk subscription plus S3 account in Jungledisk. I need to have a working webdav url to enter into Zotero to sync my research files. What piece of the puzzle and I missing? Best Regards, Daniel
Daniel Haynes on December 10, 2009 9:58 AM GMT
Geting "The underlying connection was closed: An unexpected error occurred on a receive." Error
Hi All, i am uploading images and videos using PutObject function of the amazon s3 web service. its working fine from my local PC or our server but when i am running the same services in the amazon server, i am getting following error "The underlying connection was closed: An unexpected error occurred on a receive." Please help me please...
memorypools on September 17, 2009 12:27 PM GMT
how to sync with Always Sync
Hi, I know that it is possible to Sync local folder on my PC with folder in Amazon S3 using software Always Sync. The first step is to create a folder in Amazon. And .. i can't figure out how to do this. Please advise! I've created an account, but don't understand how to create folders, upload files. nothing! Help! Thanks. email: m.stoieva@gmail.com
izidagaz on May 29, 2009 1:15 PM GMT
How can i save sql data in Amazon?
Hi friends i want to save data from sqlserver to amazon so that i can access directly from there through web service and how can i get path of web servic for particular data saved on amazon service?
sachin1234 on January 13, 2009 6:26 AM GMT
Not ready for general use
This has been a really frustrating experience. There are 3 or 4 S3 scripts floating around and none seem to work properly for me...they are buggy, outdated, offer incomplete or outdated tutorials, and have buggy Pear dependencies. I appreciate everyone's work on this but I can't believe Amazon doesn't support an official PHP client library for their product.
Gandalf on February 10, 2008 1:42 PM GMT
It is great
Thank....it is great ....very helpfull............
karmicktest1 on February 2, 2008 2:49 PM GMT
©2014, Amazon Web Services, Inc. or its affiliates. All rights reserved.