How to Set Up an SFTP Server with S3

Written by thisguymartin | Published 2022/01/21
Tech Story Tags: aws | sftp | s3 | aws-services | serverless | ftp | iam | aws-s3-bucket


This is a fairly straightforward blog post and piece of personal documentation on how to set up an SFTP server with an S3 bucket as its storage backend. I hope you find it as helpful as I did.

Step 1.

Create an S3 bucket from the AWS console. This bucket is where the transferred files will live; no custom changes need to be made to the bucket itself.
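
If you prefer to script this step instead of clicking through the console, a minimal boto3 sketch might look like the following. The bucket name and region are placeholders of my own, not values from this walkthrough.

import boto3

# Step 1 equivalent: create the bucket that will back the SFTP server.
s3 = boto3.client("s3", region_name="us-east-1")

# In us-east-1 no location constraint is needed; in any other region, pass
# CreateBucketConfiguration={"LocationConstraint": "<region>"}.
s3.create_bucket(Bucket="my-sftp-backing-bucket")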

Step 2.

Navigate to the AWS Transfer Family service and select Create server. You will be prompted to choose between FTP, FTPS, and SFTP. Select SFTP, which stands for SSH File Transfer Protocol, a network protocol that provides file access, file transfer, and file management over any reliable data stream.

Step 3.

Next, we need to select an identity provider, which in short is the access layer that decides which users may reach the SFTP server. You can do this programmatically via Lambda with your own custom identity provider, or, as in this case, create one user at a time using the AWS service-managed option.

Step 4.

Next, we can set up our own custom DNS hostname. In this case, we are just going to use the AWS-generated hostname that is provided; this can be changed later if you like.

Options:

  • None: AWS will create a DNS record such as {some-random-generated-string}.server.transfer.{region}.amazonaws.com.
  • Route 53 Alias: the service will create an alias in Route 53 so you can use a friendlier name and a subdomain such as ftps.martinpatino.com.
  • Other DNS: use a third-party DNS provider.

Step 5.

Next, select a domain. You will be given two options; here we go with Amazon S3, since we want to grant users access to a specific S3 bucket with limited permissions.

Once this step is completed, you will be prompted to review the server summary and confirm, after which your SFTP server will be created.
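
For reference, the console choices from Steps 2 through 5 map to a single CreateServer call. Here is a hedged boto3 sketch of that; the region and the printed example server ID are assumptions of mine, not output from this post.

import boto3

transfer = boto3.client("transfer", region_name="us-east-1")

response = transfer.create_server(
    Protocols=["SFTP"],                      # Step 2: SSH File Transfer Protocol
    IdentityProviderType="SERVICE_MANAGED",  # Step 3: create users one at a time in AWS
    EndpointType="PUBLIC",                   # Step 4: use the AWS-generated hostname
    Domain="S3",                             # Step 5: back the server with Amazon S3
)

print(response["ServerId"])  # something like s-1234567890abcdef0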

Step 6.

Configuration of IAM Roles with S3

Now that we have created the SFTP server and the S3 bucket the user should have access to, the next part is to handle the role permissions and policy creation. In our case, we want to restrict users to a single, specific bucket. Head to IAM and create a custom SFTP role for your user, choosing Transfer as the service use case.

You can copy and paste the policy below and replace the {custom-bucket-name} placeholder with your bucket's name.

This role policy gives the user access to list, fetch, add, update, and delete files in your S3 bucket over SFTP.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::{custom-bucket-name}"
            ],
            "Effect": "Allow",
            "Sid": "ReadWriteS3"
        },
        {
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion",
                "s3:GetObjectVersion",
                "s3:GetObjectACL",
                "s3:PutObjectACL"
            ],
            "Resource": [
                "arn:aws:s3:::{custom-bucket-name}/*"
            ],
            "Effect": "Allow",
            "Sid": ""
        }
    ]
}
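
If you would rather script the role setup as well, here is a hedged boto3 sketch of the same thing. The role name, inline policy name, and the policy file path are placeholders I made up for illustration.

import json
import boto3

iam = boto3.client("iam")

# Trust policy: Step 6's "Transfer" service use case, i.e. allow the AWS
# Transfer service to assume this role on behalf of your SFTP users.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "transfer.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

iam.create_role(
    RoleName="SFTP_COMPANY_ROLE",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the bucket policy shown above (with {custom-bucket-name} replaced)
# as an inline policy on the role.
with open("sftp-bucket-policy.json") as f:
    bucket_policy = f.read()

iam.put_role_policy(
    RoleName="SFTP_COMPANY_ROLE",
    PolicyName="sftp-s3-access",
    PolicyDocument=bucket_policy,
)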

Step 7.

To create a user, you will need a username, the S3 bucket you want the user to have access to, and the role the user is associated with, which is the one created above. Bind your new SFTP_COMPANY_ROLE, or whatever you called it, to the user; in my example, I am simply calling the user usercompany_a, but you can name it whatever you want. You will then have the option to set up a policy, which is not required; however, it helps lock the user down to a specific directory under a certain bucket.

AWS Transfer can auto-generate this policy for you if you would like to use it; it would look like the example below.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::${transfer:HomeBucket}"
      ],
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "${transfer:HomeFolder}/*",
            "${transfer:HomeFolder}"
          ]
        }
      }
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
    }
  ]
}

The final step is to add the user's SSH public key, which the user should provide to you. The user will then use the matching private key to connect to the server with an SFTP client of their choice. This field is required; without a public key, the user will not be able to connect.
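
For completeness, here is a hedged boto3 sketch of this user-creation step. The server ID, account ID, file paths, and home directory below are placeholders of my own, not values from the walkthrough.

import boto3

transfer = boto3.client("transfer")

# The public key the user handed you (they keep the private half).
with open("usercompany_a.pub") as f:
    public_key = f.read()

# Optional session policy, e.g. the auto-generated one shown above.
with open("session-policy.json") as f:
    session_policy = f.read()

transfer.create_user(
    ServerId="s-1234567890abcdef0",
    UserName="usercompany_a",
    Role="arn:aws:iam::123456789012:role/SFTP_COMPANY_ROLE",
    HomeDirectory="/custom-bucket-name/usercompany_a",
    Policy=session_policy,        # optional: scope the user to their folder
    SshPublicKeyBody=public_key,  # required for the user to authenticate
)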

Test Time

It’s time to test the SFTP connection. I use CyberDuck, which is available on both Mac and Windows. Try to connect, and if everything is set up correctly, you should be able to see the contents of your S3 bucket. You should even be able to upload files from your SFTP client.
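
If you want to script the check instead of using a GUI client, a minimal paramiko sketch could look like the one below. The endpoint hostname, username, and key path are placeholders for your own values.

import paramiko

# Load the private key that matches the public key registered on the user.
key = paramiko.RSAKey.from_private_key_file("/path/to/private_key")

# The endpoint is the AWS-generated hostname from Step 4.
transport = paramiko.Transport(
    ("s-1234567890abcdef0.server.transfer.us-east-1.amazonaws.com", 22)
)
transport.connect(username="usercompany_a", pkey=key)

sftp = paramiko.SFTPClient.from_transport(transport)
print(sftp.listdir("."))            # list the home directory in the bucket
sftp.put("hello.txt", "hello.txt")  # upload a local file

sftp.close()
transport.close()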

Success!!!

Hope you enjoy learning a bit!
