AWS CLI

The AWS Command Line Interface (AWS CLI) is a tool for interacting with Amazon Web Services from a command-line shell. Only its S3 commands are relevant to the USS S3 gateway.

The AWS CLI tool is available for Windows, macOS, and Linux. The steps below show how to install and use the tool on CentOS Linux. Installation and usage instructions for other operating systems are available in the AWS CLI documentation.

Install awscli

[user@localhost ~]$ sudo yum install awscli
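
Once the package is installed, running the tool with the '--version' option confirms it is available (the exact version string will vary by release):

[user@localhost ~]$ aws --version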

Configure awscli

[user@localhost ~]$ aws configure
AWS Access Key ID [None]: <username>
AWS Secret Access Key [None]: <password>
Default region name [None]: [Enter]
Default output format [None]: [Enter]
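
The aws configure command writes these values to files under ~/.aws in the user's home directory. The stored credentials can be inspected there; '<username>' and '<password>' below stand in for the values entered above:

[user@localhost ~]$ cat ~/.aws/credentials
[default]
aws_access_key_id = <username>
aws_secret_access_key = <password>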

Run awscli

create a new bucket:

[user@localhost ~]$ aws --endpoint-url <endpoint-url> s3 mb s3://<bucket-name>

list all buckets:

[user@localhost ~]$ aws --endpoint-url <endpoint-url> s3 ls
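
list the objects under a bucket or prefix (the '--recursive' option descends into all prefixes; 'mybucket' is a placeholder name):

[user@localhost ~]$ aws --endpoint-url <endpoint-url> s3 ls s3://mybucket/ --recursive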

synchronize a local directory to a bucket under a specific prefix, purging any objects from the bucket/prefix that are no longer in the local directory:

[user@localhost ~]$ aws --endpoint-url <endpoint-url> s3 sync /home/user/downloads/ s3://mybucket/mydownloads/ --delete
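
the sync command also works in the opposite direction; to download, give the bucket/prefix as the source and the local directory as the destination (again purging local files that no longer exist in the bucket):

[user@localhost ~]$ aws --endpoint-url <endpoint-url> s3 sync s3://mybucket/mydownloads/ /home/user/downloads/ --delete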

generate a presigned URL granting time-limited access to an object (the command returns a link that can be shared):

[user@localhost ~]$ aws --endpoint-url <endpoint-url> s3 presign s3://mybucket/mydownloads/myobject --expires-in 3600
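
the returned link can be fetched with any HTTP client until it expires, for example with curl (substitute the actual URL printed by the presign command for the '<presigned-url>' placeholder):

[user@localhost ~]$ curl -o myobject "<presigned-url>"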

More examples can be found in the AWS CLI Command Reference for S3.

Access Buckets Anonymously

If a bucket policy has been set to allow access without credentials, the awscli utility can be run with the '--no-sign-request' option to issue commands without loading credentials. Note: If the bucket policy specifies a prefix other than '/', that prefix will need to be provided with the request:

[user@localhost ~]$ aws --endpoint-url <endpoint-url> s3 ls s3://mybucket/publicobjects/ --no-sign-request
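
Objects can also be downloaded anonymously in the same way, provided the bucket policy permits it:

[user@localhost ~]$ aws --endpoint-url <endpoint-url> s3 cp s3://mybucket/publicobjects/myobject . --no-sign-request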

Optimizing Transfers

Running multiple AWS CLI uploads in parallel can better utilize the available bandwidth and improve performance. Ideally, you will have some understanding of the dataset so that you can divide the files into roughly equal portions. One approach is to use the --include and --exclude options to address mutually exclusive subsets, running each aws command in its own screen session. The following commands start two copies in separate screen sessions: one copies everything that starts with the letter 'B', while the other copies everything that does not. Another approach is to sync each top-level directory separately, assuming the directories are roughly equal in size.

[user@localhost ~]$ screen -S awscopy1 -d -m aws --endpoint-url <endpoint-url> s3 cp /home/user/downloads/ s3://mybucket/mydownloads/ --recursive --exclude "*" --include "B*"
[user@localhost ~]$ screen -S awscopy2 -d -m aws --endpoint-url <endpoint-url> s3 cp /home/user/downloads/ s3://mybucket/mydownloads/ --recursive --exclude "B*"
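
Independent of how the work is split, the AWS CLI also exposes built-in transfer tuning through its configuration. The settings below are standard AWS CLI S3 configuration options; the values shown are illustrative starting points, not recommendations for any particular gateway:

[user@localhost ~]$ aws configure set default.s3.max_concurrent_requests 20
[user@localhost ~]$ aws configure set default.s3.multipart_chunksize 16MB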