
Securing the Connection to S3 from EC2

Posted on Sep 7 • Originally published at newsletter.simpleaws.dev

You've deployed your app on an EC2 instance, and there's a file in an S3 bucket that you need to access from the app. You created a public S3 bucket and uploaded the file, and it works! But then you read somewhere that keeping your private files in a public S3 bucket is a bad idea, so you set out to fix it.

(The original post includes a "Deploy initial setup" CloudFormation link, plus architecture diagrams of the setup before and after the solution.)

Testing the initial setup

1. Open the CloudFormation console.
2. Select the initial state stack.
3. Click the Outputs tab.
4. Copy the value for EC2InstancePublicIp.
5. Paste it in the browser, append :3000 and hit Enter/Return.

Step 1: Create a VPC Endpoint for S3

1. Go to the VPC console.
2. In the panel on the left, click Endpoints.
3. Click Create Endpoint.
4. Enter a name.
5. In the Services section, enter S3 in the search box. Among the results named com.amazonaws.your_region.s3 (replace your_region with the region where you deployed the initial setup, which is where the S3 bucket is), select the one that says Interface in the Type column.
6. For VPC, select SimpleAWSVPC from the dropdown list.
7. Under Subnets, select us-east-1a and us-east-1b, and for each one click the dropdown and select the only available subnet.
8. Under Security groups, select the one called VPCEndpointSecurityGroup.
9. Under Policy, pick Full Access for now (we'll change that in Step 2).
10. Open Additional settings.
11. Check Enable DNS name.
12. Uncheck Enable private DNS only for inbound endpoint.
13. Click Create endpoint.

Step 2: Restrict the VPC Endpoint policy

1. In the Amazon VPC console, go to Endpoints.
2. Select the endpoint you just created.
3. Click the Policy tab.
4. Click Edit Policy.
5. Take a policy like the first one sketched below, replace the placeholder values REPLACE_BUCKET_NAME and REPLACE_VPC_ID with the name of your S3 bucket and the ID of SimpleAWSVPC, paste it into the Edit Policy page, and click Save.

Step 3: Restrict access to the bucket with a bucket policy

1. Open the S3 console.
2. Click on the bucket that you created with the initial setup.
3. Click on the Permissions tab.
4. Scroll down to Bucket Policy and click Edit.
5. Paste a policy like the second one sketched below, replacing the placeholders REPLACE_BUCKET_NAME and REPLACE_VPC_ENDPOINT_ID with their values (REPLACE_VPC_ENDPOINT_ID is not the same as REPLACE_VPC_ID from the previous step). Then click Save changes.
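Here's a minimal sketch of an endpoint policy along those lines for Step 2, reusing the REPLACE_BUCKET_NAME and REPLACE_VPC_ID placeholders. The action list is an assumption based on what the walkthrough's Node.js app does (it writes one object and reads it back); adjust it to whatever your app actually needs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyOurBucketFromOurVpc",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::REPLACE_BUCKET_NAME",
        "arn:aws:s3:::REPLACE_BUCKET_NAME/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:sourceVpc": "REPLACE_VPC_ID"
        }
      }
    }
  ]
}
```

The aws:sourceVpc condition is somewhat redundant inside an endpoint policy (only traffic from your VPC reaches the endpoint anyway); the Resource list is what actually scopes access down to your bucket.

For Step 3, here's a sketch of a bucket policy following the common aws:sourceVpce pattern: it denies every request that doesn't come in through your VPC Endpoint. Keep in mind that a blanket Deny like this also blocks requests made from the S3 console, since those don't travel through the endpoint:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptThroughVpcEndpoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::REPLACE_BUCKET_NAME",
        "arn:aws:s3:::REPLACE_BUCKET_NAME/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "REPLACE_VPC_ENDPOINT_ID"
        }
      }
    }
  ]
}
```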
Before deleting the CloudFormation stack, you'll need to empty the S3 bucket! The Node.js app puts a file in there.

How the solution works

First of all, you'll notice that a VPC Endpoint is for one specific service, S3 in this case. If you wanted to connect to other services, you'd need to create a separate VPC Endpoint for each different service.

The second thing you'll notice is that there are two types of endpoints: Interface and Gateway. Gateway endpoints are only for S3 and DynamoDB, while Interface endpoints are for nearly everything. Gateway endpoints are simpler, so use them when you can (except if you're writing a newsletter and want to show a few things about Interface endpoints).

Interface endpoints work by creating an Elastic Network Interface (ENI) in every subnet where you deploy them, and automatically routing the traffic that's addressed to the public endpoint of the service to that ENI. That way, you don't need to make any changes to the code. This only works if you check Enable DNS name.

The existing endpoint policy is a Full Access policy, which is the default when a VPC endpoint is created. It allows all actions on the S3 service from anyone. Instead of that, we're setting up a more restrictive policy, which only allows access to our specific bucket and denies access to all other buckets. VPC Endpoint policies are IAM resource policies, and as such, anything that's not explicitly allowed is implicitly denied.

Bucket policies are another type of IAM resource policy. Obviously, this bucket policy only applies to our S3 bucket. It's important to add it because, while we've restricted what the VPC Endpoint can be used for, the S3 bucket can still be accessed from outside the VPC (e.g. from the public internet). The bucket policy is what prevents that, by restricting access to requests that come through the VPC Endpoint.

In this case I kept internet access for the VPC and for the EC2 instance itself, just to make it easier to trigger the code with an HTTP request. The solution is still a good idea here, because traffic to S3 doesn't go over the public internet, but admittedly the public internet is a viable alternative. Where this solution matters more is when you don't have internet access at all. Sure, adding it is rather simple, but then you're either exposing yourself unnecessarily by giving your instances a public IP address they don't need, or you're paying for a NAT Gateway. In those cases, VPC Endpoints are a much simpler, safer and cheaper solution.

Conceptually, you can think of this as giving the S3 service a private IP address inside your VPC. In reality, what you're doing is creating a private IP address in your VPC that leads to the S3 service, so that conception is pretty accurate! Behind the scenes (and you can see this easily), the VPC service creates an ENI in every subnet where you deploy the VPC Endpoint. Those ENIs forward the traffic to S3 service endpoints that are private to the AWS network.

Also behind the scenes, there's a Route 53 Private Hosted Zone that resolves the S3 address to the private IPs of those ENIs, instead of to the public IPs of the public endpoints. That's why you don't need to change the code: your code depends on the address of the S3 service, and that private hosted zone takes care of resolving it to a different address. You can't see this private hosted zone; it's managed by AWS and hidden from users.
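To make that concrete, here's a minimal sketch of the kind of read the app performs, using the AWS SDK for JavaScript v3 (the bucket placeholder, key and region are taken from the walkthrough; the function name is just for illustration). The same code works before and after creating the VPC Endpoint, because it keeps talking to the standard S3 hostname and DNS decides whether that name resolves to S3's public IPs or to the endpoint's ENIs:

```javascript
// Minimal sketch: read one object from S3 with the AWS SDK for JavaScript v3.
// Nothing here references the VPC Endpoint; the SDK targets the regular S3
// hostname, and the endpoint's private DNS resolves it to the ENIs' private IPs.
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");

const s3 = new S3Client({ region: "us-east-1" });

async function readThankYouFile() {
  const response = await s3.send(
    new GetObjectCommand({
      Bucket: "REPLACE_BUCKET_NAME", // placeholder: the bucket from the initial setup
      Key: "thankyou.txt",
    })
  );
  return response.Body.transformToString();
}

readThankYouFile().then(console.log).catch(console.error);
```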
Best practices

Least Privilege Access to Bucket: This is basically what we did in Step 3: we disabled public access and implemented a policy that only allows reads from the VPC. Try reading from that S3 bucket from your own computer; it should now fail with an Access Denied error:

aws s3api get-object --bucket 12ewqaewr2qqq --key thankyou.txt thankyou.txt --region us-east-1

Regularly Audit IAM Policies: Regularly review and tighten your IAM policies. Not only for the VPC Endpoint and S3 bucket, but also for the EC2 instance!

Stop copying cloud solutions, start understanding them. Join over 3000 devs, tech leads, and experts learning how to architect cloud solutions, not pass exams, with the Simple AWS newsletter.

- Real scenarios and solutions
- The why behind the solutions
- Best practices to improve them

Subscribe for free. If you'd like to know more about me, you can find me on LinkedIn or at www.guilleojeda.com.


