
Upload a File to S3 With the AWS CLI

When working with Amazon S3 (Simple Storage Service), you're probably using the S3 web console to download, copy, or upload files to S3 buckets. Using the console is perfectly fine; that's what it was designed for, to begin with.

Especially for admins who are used to more mouse clicks than keyboard commands, the web console is probably the easiest. However, admins will eventually encounter the need to perform bulk file operations with Amazon S3, like an unattended file upload. The GUI is not the best tool for that.

For such automation requirements with Amazon Web Services, including Amazon S3, the AWS CLI tool provides admins with command-line options for managing Amazon S3 buckets and objects.

In this article, you will learn how to use the AWS CLI command-line tool to upload, copy, download, and synchronize files with Amazon S3. You will also learn the basics of providing access to your S3 bucket and configuring that access profile to work with the AWS CLI tool.

Prerequisites

Since this is a how-to article, there will be examples and demonstrations in the succeeding sections. For you to follow along successfully, you will need to meet several requirements.

  • An AWS account. If you don't have an existing AWS subscription, you can sign up for the AWS Free Tier.
  • An AWS S3 bucket. You can use an existing bucket if you'd prefer. Still, it is recommended to create an empty bucket instead. Please refer to Creating a bucket.
  • A Windows 10 computer with at least Windows PowerShell 5.1. In this article, PowerShell 7.0.2 will be used.
  • The AWS CLI version 2 tool must be installed on your computer.
  • Local folders and files that you will upload or synchronize with Amazon S3.

Preparing Your AWS S3 Access

Suppose that you already have the requirements in place. You'd think you could already go and start operating the AWS CLI with your S3 bucket. I mean, wouldn't it be nice if it were that simple?

For those of you who are just starting to work with Amazon S3 or AWS in general, this section aims to help you set up access to S3 and configure an AWS CLI profile.

The full documentation for creating an IAM user in AWS can be found at the link below. Creating an IAM User in Your AWS Account

Creating an IAM User with S3 Access Permission

When accessing AWS using the CLI, you will need to create one or more IAM users with enough access to the resources you intend to work with. In this section, you will create an IAM user with access to Amazon S3.

To create an IAM user with access to Amazon S3, you first need to log in to your AWS IAM console. Under the Access management group, click on Users. Next, click on Add user.

IAM Users Menu

Type in the name of the IAM user you are creating inside the User name* box, such as s3Admin. In the Access type* selection, put a check on Programmatic access. Then, click the Next: Permissions button.

Set IAM user details

Next, click on Attach existing policies directly. Then, search for the AmazonS3FullAccess policy name and put a check on it. When done, click on Next: Tags.

Assign IAM user permissions

Creating tags is optional on the Add tags page; you can just skip this and click the Next: Review button.

IAM user tags

On the Review page, you are presented with a summary of the new account being created. Click Create user.

IAM user summary

Finally, once the user is created, you must copy the Access key ID and the Secret access key values and save them for later use. Note that this is the only time that you can see these values.

IAM user key credentials

Setting Up an AWS Profile On Your Computer

Now that you've created the IAM user with the appropriate access to Amazon S3, the next step is to set up the AWS CLI profile on your computer.

This section assumes that you already installed the AWS CLI version 2 tool as required. For the profile creation, you will need the following information:

  • The Access key ID of the IAM user.
  • The Secret access key associated with the IAM user.
  • The Default region name corresponding to the location of your AWS S3 bucket. You can check out the list of endpoints using this link. In this article, the AWS S3 bucket is located in the Asia Pacific (Sydney) region, and the corresponding endpoint is ap-southeast-2.
  • The default output format. Use JSON for this.

To create the profile, open PowerShell, type the command below, and follow the prompts.
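Profile setup is done with the aws configure command. Running it without options, as shown below, writes the values to the default profile, which the rest of the commands in this article assume; add --profile <name> if you prefer a named profile instead.

```shell
# Configure the default AWS CLI profile; the tool prompts for the
# Access key ID, Secret access key, default region, and output format.
aws configure
```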

Enter the Access key ID, Secret access key, Default region name, and default output format. Refer to the demonstration below.

Configure an AWS CLI profile
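Once the prompts are completed, the AWS CLI saves the values in two plain-text files under your user profile (~/.aws on Linux and macOS, %USERPROFILE%\.aws on Windows). For the default profile, they would look roughly like the sketch below; the key values shown are placeholders, not real credentials.

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[default]
region = ap-southeast-2
output = json
```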

Testing AWS CLI Access

After configuring the AWS CLI profile, you can confirm that the profile is working by running the command below in PowerShell.
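A simple way to verify the profile is a bucket listing:

```shell
# List all S3 buckets the configured credentials can see
aws s3 ls
```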

The command above should list the Amazon S3 buckets that you have in your account. The demonstration below shows the command in action; the resulting list of available S3 buckets indicates that the profile configuration was successful.

List S3 buckets

To learn about the AWS CLI commands specific to Amazon S3, you can visit the AWS CLI Command Reference S3 page.

Managing Files in S3

With the AWS CLI, typical file management operations can be done, like uploading files to S3, downloading files from S3, deleting objects in S3, and copying S3 objects to another S3 location. It's all just a matter of knowing the right command, syntax, parameters, and options.

In the following sections, the environment used consists of the following.

  • Two S3 buckets, namely atasync1 and atasync2. The screenshot below shows the existing S3 buckets in the Amazon S3 console.
List of available S3 bucket names in the Amazon S3 console
  • Local directory and files located under c:\sync.
Local Directory

Uploading Individual Files to S3

When you upload files to S3, you can upload one file at a time, or upload multiple files and folders recursively. Depending on your requirements, you may choose one over the other that you deem appropriate.

To upload a file to S3, you'll need to provide two arguments (source and destination) to the aws s3 cp command.

For example, to upload the file c:\sync\logs\log1.xml to the root of the atasync1 bucket, you can use the command below.

            aws s3 cp c:\sync\logs\log1.xml s3://atasync1/          

Note: S3 bucket names are always prefixed with s3:// when used with the AWS CLI.

Run the above command in PowerShell, but change the source and destination to fit your environment first. The output should look similar to the demonstration below.

Upload file to S3

The demo above shows that the file named c:\sync\logs\log1.xml was uploaded without errors to the S3 destination s3://atasync1/.

Use the command below to list the objects at the root of the S3 bucket.
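Listing uses the aws s3 ls command with the bucket location as its argument:

```shell
# List the objects at the root of the atasync1 bucket
aws s3 ls s3://atasync1/
```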

Running the command above in PowerShell would result in a similar output, as shown in the demo below. As you can see, the file log1.xml is present in the root of the S3 location.

List the uploaded file in S3

Uploading Multiple Files and Folders to S3 Recursively

The previous section showed you how to copy a single file to an S3 location. What if you need to upload multiple files from a folder and sub-folders? Surely you wouldn't want to run the same command multiple times for different filenames, right?

The aws s3 cp command has an option to process files and folders recursively: the --recursive option.

As an example, the directory c:\sync contains 166 objects (files and sub-folders).

The folder containing multiple files and sub-folders

Using the --recursive option, all the contents of the c:\sync folder will be uploaded to S3 while also retaining the folder structure. To test, use the example code below, but make sure to change the source and destination appropriately for your environment.

You'll notice from the code below that the source is c:\sync, and the destination is s3://atasync1/sync. The /sync key that follows the S3 bucket name tells the AWS CLI to upload the files into the /sync folder in S3. If the /sync folder does not exist in S3, it will be automatically created.

            aws s3 cp c:\sync s3://atasync1/sync --recursive          

The code above will result in the output shown in the demonstration below.

Upload multiple files and folders to S3

Uploading Multiple Files and Folders to S3 Selectively

In some cases, uploading ALL types of files is not the best option, like when you only need to upload files with specific file extensions (e.g., *.ps1). Another two options available to the cp command are --include and --exclude.

While the command in the previous section includes all files in the recursive upload, the command below will include only the files matching the *.ps1 file extension and exclude every other file from the upload.

            aws s3 cp c:\sync s3://atasync1/sync --recursive --exclude * --include *.ps1          

The demonstration below shows how the code above works when executed.

Upload files that matched a specific file extension

Another example: if you want to include multiple different file extensions, you will need to specify the --include option multiple times.

The example command below will include only the *.csv and *.png files in the copy operation.

            aws s3 cp c:\sync s3://atasync1/sync --recursive --exclude * --include *.csv --include *.png          

Running the code above in PowerShell would present you with a similar result, as shown below.

Upload files with multiple include options

Downloading Objects from S3

Based on the examples you've learned in this section, you can also perform the copy operations in reverse. Meaning, you can download objects from the S3 bucket location to the local machine.

Copying from S3 to local requires you to switch the positions of the source and the destination. The source being the S3 location, and the destination the local path, like the one shown below.

            aws s3 cp s3://atasync1/sync c:\sync          

Note that the same options used when uploading files to S3 are also applicable when downloading objects from S3 to local. For example, the command below downloads all objects using the --recursive option.

            aws s3 cp s3://atasync1/sync c:\sync --recursive          

Copying Objects Between S3 Locations

Apart from uploading and downloading files and folders, using the AWS CLI, you can also copy or move files between two S3 bucket locations.

You'll notice the command below uses one S3 location as the source and another S3 location as the destination.

            aws s3 cp s3://atasync1/Log1.xml s3://atasync2/          

The demonstration below shows the source file being copied to another S3 location using the command above.

Copy objects from one S3 location to another S3 location

Synchronizing Files and Folders with S3

You've learned how to upload, download, and copy files in S3 using the AWS CLI commands so far. In this section, you'll learn about one more file operation command available in the AWS CLI for S3: the sync command. The sync command only processes the updated, new, and deleted files.

There are some cases where you need to keep the contents of an S3 bucket updated and synchronized with a local directory on a server. For example, you may have a requirement to keep transaction logs on a server synchronized to S3 at an interval.

Using the command below, *.XML log files located under the c:\sync folder on the local server will be synced to the S3 location at s3://atasync1.

            aws s3 sync C:\sync\ s3://atasync1/ --exclude * --include *.xml          

The demonstration below shows that after running the command above in PowerShell, all *.XML files were uploaded to the S3 destination s3://atasync1/.

Synchronizing local files to S3

Synchronizing New and Updated Files with S3

In this next example, it is assumed that the contents of the log file Log1.xml were modified. The sync command should pick up that modification and upload the changes done on the local file to S3, as shown in the demo below.

The command to utilise is still the same as the previous case.

Synchronizing changes to S3

As you can see from the output above, since only the file Log1.xml was changed locally, it was also the only file synchronized to S3.

Synchronizing Deletions with S3

By default, the sync command does not process deletions. Any file deleted from the source location is not removed at the destination. Well, not unless you use the --delete option.

In this next example, the file named Log5.xml has been deleted from the source. The command to synchronize the files is appended with the --delete option, as shown in the code below.

            aws s3 sync C:\sync\ s3://atasync1/ --exclude * --include *.xml --delete          

When you run the command above in PowerShell, the deleted file named Log5.xml should also be deleted at the destination S3 location. The sample result is shown below.

Synchronize file deletions to S3

Summary

Amazon S3 is an excellent resource for storing files in the cloud. With the AWS CLI tool, the way you use Amazon S3 is further expanded, opening the opportunity to automate your processes.

In this article, you've learned how to use the AWS CLI tool to upload, download, and synchronize files and folders between local locations and S3 buckets. You've also learned that S3 buckets' contents can also be copied or moved to other S3 locations.

There can be many more use-case scenarios for using the AWS CLI tool to automate file management with Amazon S3. You can even try combining it with PowerShell scripting to build your own reusable tools or modules. It is up to you to find those opportunities and show off your skills.

Further Reading

  • What Is the AWS Command Line Interface?
  • What is Amazon S3?
  • How To Sync Local Files And Folders To AWS S3 With The AWS CLI


Source: https://adamtheautomator.com/upload-file-to-s3/