...
Parallel uploads can be performed with the AWS CLI to better utilize the available bandwidth and improve performance. Ideally, you have some understanding of the dataset so you can divide the files into roughly equal portions. One approach is to use the --include and --exclude options to address mutually exclusive subsets. For example, you can create separate screen sessions and run a different aws command in each. The following commands start two copies in separate screen sessions: one copies every file in /home/user/downloads/ that starts with the letter 'B', while the other copies every file that does not start with 'B'. Another approach is to simply sync each top-level directory separately, assuming the directories are roughly equal in size.
```
[user@localhost ~]$ screen -S awscopy1 -d -m aws --endpoint-url <endpoint-url> s3 cp /home/user/downloads/ s3://mybucket/mydownloads/ --recursive --exclude "*" --include "B*"
[user@localhost ~]$ screen -S awscopy2 -d -m aws --endpoint-url <endpoint-url> s3 cp /home/user/downloads/ s3://mybucket/mydownloads/ --recursive --exclude "B*"
```
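The per-directory sync approach mentioned above can be sketched as a loop, with one screen session per top-level directory. This is a minimal sketch, not a tested recipe: the source path, bucket name, and session-name prefix are taken from the examples in this article, and <endpoint-url> remains a placeholder you must fill in for your environment.

```shell
# Start one detached screen session per top-level directory,
# each running its own "aws s3 sync" of that directory.
# Assumes the layout from this article: /home/user/downloads/ -> s3://mybucket/mydownloads/
for dir in /home/user/downloads/*/; do
  name=$(basename "$dir")
  screen -S "awssync-$name" -d -m \
    aws --endpoint-url <endpoint-url> s3 sync "$dir" "s3://mybucket/mydownloads/$name/"
done
```

Because each directory syncs independently, a failed session can be re-run on its own, and the overall throughput scales with the number of roughly equal-sized directories.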
...