
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 39

Question 1101

Exam Question

A company has created a VPC with multiple private subnets in multiple Availability Zones (AZs) and one public subnet in one of the AZs. The public subnet is used to launch a NAT gateway. There are instances in the private subnets that use the NAT gateway to connect to the internet. In the event of an AZ failure, the company wants to ensure that the instances are not all experiencing internet connectivity issues and that there is a backup plan ready.

Which solution should a solutions architect recommend that is MOST highly available?

A. Create a new public subnet with a NAT gateway in the same AZ. Distribute the traffic between the two NAT gateways.
B. Create an Amazon EC2 NAT instance in a new public subnet. Distribute the traffic between the NAT gateway and the NAT instance.
C. Create public subnets in each AZ and launch a NAT gateway in each subnet. Configure the traffic from the private subnets in each AZ to the respective NAT gateway.
D. Create an Amazon EC2 NAT instance in the same public subnet. Replace the NAT gateway with the NAT instance and associate the instance with an Auto Scaling group with an appropriate scaling policy.

Correct Answer

C. Create public subnets in each AZ and launch a NAT gateway in each subnet. Configure the traffic from the private subnets in each AZ to the respective NAT gateway.

Explanation

To ensure high availability and redundancy for the instance in the private subnet, it is recommended to distribute the traffic between multiple NAT gateways in different AZs. This will prevent a single point of failure in case of an AZ failure.

Option A suggests creating a new public subnet with a NAT gateway in the same AZ. While this provides redundancy within the same AZ, it does not address the requirement for high availability in case of an AZ failure.

Option B suggests using an Amazon EC2 NAT instance in a separate public subnet. While this can provide redundancy, it requires additional management and configuration compared to using NAT gateways. NAT gateways are a managed service provided by AWS and are recommended over NAT instances.

Option D suggests replacing the NAT gateway with an Amazon EC2 NAT instance in an Auto Scaling group. While this adds some resiliency, NAT instances must be managed and patched by the company, and a NAT instance in the same public subnet still does not protect against an AZ failure.

Option C, on the other hand, recommends creating public subnets in each AZ and launching a NAT gateway in each subnet. By distributing the traffic from the private subnets in each AZ to the respective NAT gateway, you ensure that each AZ has its own NAT gateway, providing high availability and redundancy in case of an AZ failure. This is the most highly available solution among the given options.
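
For illustration, here is a minimal boto3 sketch of Option C, assuming hypothetical subnet and route table IDs: one NAT gateway is created in a public subnet in each AZ, and each AZ's private route table sends internet-bound traffic to the NAT gateway in the same AZ.

```python
import boto3

ec2 = boto3.client("ec2")

# Placeholder IDs; in practice these come from your VPC setup.
az_layout = {
    "us-east-1a": {"public_subnet": "subnet-pub-a", "private_rtb": "rtb-priv-a"},
    "us-east-1b": {"public_subnet": "subnet-pub-b", "private_rtb": "rtb-priv-b"},
}

for az, ids in az_layout.items():
    # Each NAT gateway needs its own Elastic IP allocation.
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(
        SubnetId=ids["public_subnet"],
        AllocationId=eip["AllocationId"],
    )
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    # Route this AZ's private subnets to the NAT gateway in the same AZ,
    # so an AZ failure only affects instances in that AZ.
    ec2.create_route(
        RouteTableId=ids["private_rtb"],
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )
```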

Question 1102

Exam Question

A company is using a VPC peering strategy to connect its VPCs in a single Region to allow for cross-communication. A recent increase in account creations and VPCs has made it difficult to maintain the VPC peering strategy, and the company expects to grow to hundreds of VPCs. There are also new requests to create site-to-site VPNs with some of the VPCs. A solutions architect has been tasked with creating a centrally managed networking setup for multiple accounts, VPCs, and VPNs.

Which networking solution meets these requirements?

A. Configure shared VPCs and VPNs and share them with each other.
B. Configure a hub-and-spoke VPC and route all traffic through VPC peering.
C. Configure an AWS Direct Connect connection between all VPCs and VPNs.
D. Configure a transit gateway with AWS Transit Gateway and connect all VPCs and VPNs.

Correct Answer

D. Configure a transit gateway with AWS Transit Gateway and connect all VPCs and VPNs.

Explanation

To address the company’s requirements of maintaining a centrally managed networking setup for multiple accounts, VPCs, and VPNs, the recommended solution is to use AWS Transit Gateway.

Option A suggests configuring shared VPCs and VPNs and sharing them with each other. While shared VPCs can simplify management and allow for cross-communication, they may not scale well to accommodate the company’s expected growth to hundreds of VPCs. Additionally, shared VPCs do not directly address the requirement for site-to-site VPNs.

Option B suggests configuring a hub-and-spoke architecture and routing all traffic through VPC peering. While this can work for a smaller number of VPCs, it can become difficult to manage and scale as the number of VPCs grows. It also does not address the requirement for site-to-site VPNs.

Option C suggests configuring AWS Direct Connect between all VPCs and VPNs. While AWS Direct Connect provides a dedicated connection between on-premises networks and AWS, it can be complex and costly to set up and manage connections for a large number of VPCs and VPNs.

Option D, configuring a transit gateway with AWS Transit Gateway, is the recommended solution for this scenario. Transit Gateway simplifies network architecture by acting as a hub that connects multiple VPCs and VPNs. It provides a centralized and scalable solution for connecting VPCs and VPNs across accounts and Regions. With Transit Gateway, the company can easily manage and scale its networking setup, including hundreds of VPCs and site-to-site VPNs, while maintaining control and visibility over the network traffic.
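
The following boto3 sketch, with placeholder VPC, subnet, and customer gateway IDs, shows the shape of Option D: create one transit gateway, attach each VPC to it, and terminate site-to-site VPNs on it.

```python
import boto3

ec2 = boto3.client("ec2")

# Create the central hub with default route table association/propagation.
tgw = ec2.create_transit_gateway(
    Description="Central hub for all VPCs and site-to-site VPNs",
    Options={
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach a VPC (repeat per VPC, including VPCs in other accounts shared
# via AWS Resource Access Manager).
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-example",
    SubnetIds=["subnet-example-a", "subnet-example-b"],
)

# Terminate a site-to-site VPN on the transit gateway instead of a
# virtual private gateway.
ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId="cgw-example",
    TransitGatewayId=tgw_id,
)
```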

Question 1103

Exam Question

A company is looking for a solution that can store video archives in AWS from old news footage. The company needs to minimize costs and will rarely need to restore these files. When the files are needed, they must be available in a maximum of five minutes.

What is the MOST cost-effective solution?

A. Store the video archives in Amazon S3 Glacier and use Expedited retrievals.
B. Store the video archives in Amazon S3 Glacier and use Standard retrievals.
C. Store the video archives in Amazon S3 Standard-Infrequent Access (S3 Standard-IA).
D. Store the video archives in Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA).

Correct Answer

B. Store the video archives in Amazon S3 Glacier and use Standard retrievals.

Explanation

In this scenario, the company needs to minimize costs while ensuring that the video archives are available within a maximum of five minutes when needed. Amazon S3 Glacier is a suitable storage option for long-term archival of data with cost optimization. Glacier offers different retrieval options, including Expedited, Standard, and Bulk retrievals.

Expedited retrievals (Option A) in Amazon S3 Glacier provide the fastest access to archived data but come at a higher cost. Since the company rarely needs to restore these files, it’s not necessary to pay for expedited retrievals, which are typically used for more time-sensitive scenarios.

Standard retrievals (Option B) in Amazon S3 Glacier are suitable for the given requirements. Although the retrieval time is within several minutes, the cost is lower compared to expedited retrievals. This option provides a balance between availability and cost.

Amazon S3 Standard-Infrequent Access (S3 Standard-IA) (Option C) is not the most cost-effective option in this scenario. S3 Standard-IA is designed for data that is accessed infrequently but still needs immediate, millisecond access, and its per-GB storage cost is higher than that of archival storage. Because the company rarely needs to restore these files, S3 Standard-IA would not be cost-effective.

Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) (Option D) is not the most suitable option for this scenario. S3 One Zone-IA is designed for data that can be easily reproduced or that is not critical if lost. Since the video archives are valuable and need to be available within a specific timeframe, it’s recommended to use a more resilient storage option like Amazon S3 Glacier.

Therefore, storing the video archives in Amazon S3 Glacier and using Standard retrievals (Option B) would provide the most cost-effective solution while meeting the requirements of availability within a maximum of five minutes.
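
If the archives are stored in the S3 Glacier storage class as in Option B, each restore request specifies a retrieval tier. Below is a minimal boto3 sketch, using a hypothetical bucket and key, that requests a temporary restored copy with the tier chosen in Option B.

```python
import boto3

s3 = boto3.client("s3")

s3.restore_object(
    Bucket="example-news-archive",          # hypothetical bucket
    Key="footage/1998/tape-0042.mxf",       # hypothetical object key
    RestoreRequest={
        "Days": 7,  # how long the restored copy remains available
        "GlacierJobParameters": {"Tier": "Standard"},  # retrieval tier per Option B
    },
)
```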

Question 1104

Exam Question

A company mandates that an Amazon S3 gateway endpoint must allow traffic to trusted buckets only.

Which method should a solutions architect implement to meet this requirement?

A. Create a bucket policy for each of the company’s trusted S3 buckets that allows traffic only from the company’s trusted VPCs.
B. Create a bucket policy for each of the company’s trusted S3 buckets that allows traffic only from the company’s S3 gateway endpoint IDs.
C. Create an S3 endpoint policy for each of the company’s S3 gateway endpoints that blocks access from any VPC other than the company’s trusted VPCs.
D. Create an S3 endpoint policy for each of the company’s S3 gateway endpoints that provides access to the Amazon Resource Name (ARN) of the trusted S3 buckets.

Correct Answer

B. Create a bucket policy for each of the company’s trusted S3 buckets that allows traffic only from the company’s S3 gateway endpoint IDs.

Explanation

To meet the requirement of allowing traffic to trusted buckets only through an S3 gateway endpoint, you can create bucket policies for each of the trusted S3 buckets. These bucket policies can specify that access is only allowed from the company’s S3 gateway endpoint IDs.

Option A suggests creating bucket policies that allow traffic only from the company’s trusted VPCs. However, the requirement specifically mentions allowing traffic through the S3 gateway endpoint, not from specific VPCs. Therefore, this option does not align with the requirement.

Option C suggests creating an S3 endpoint policy for each S3 gateway endpoint to block access from any VPC other than the company’s trusted VPCs. While this option may restrict access from unauthorized VPCs, it does not specifically address allowing traffic to trusted buckets only. It is better suited for controlling access to the S3 gateway endpoint itself, rather than limiting access to specific buckets.

Option D suggests creating an S3 endpoint policy for each S3 gateway endpoint that provides access to the ARN of the trusted S3 buckets. While this may provide access to the buckets, it does not enforce the requirement of allowing traffic to trusted buckets only through the S3 gateway endpoint. It lacks the specificity of allowing access only from the S3 gateway endpoint IDs.

Therefore, the most appropriate method to meet the requirement is to create a bucket policy for each of the company’s trusted S3 buckets that allows traffic only from the company’s S3 gateway endpoint IDs (Option B). This ensures that only traffic originating from the S3 gateway endpoint can access the trusted buckets.
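
A minimal boto3 sketch of Option B, using a hypothetical bucket name and endpoint ID: the bucket policy denies any request whose aws:sourceVpce value does not match the trusted S3 gateway endpoint.

```python
import json
import boto3

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowOnlyFromTrustedGatewayEndpoint",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-trusted-bucket",
                "arn:aws:s3:::example-trusted-bucket/*",
            ],
            "Condition": {
                # Requests not arriving through this endpoint are denied.
                "StringNotEquals": {"aws:sourceVpce": "vpce-0123456789abcdef0"}
            },
        }
    ],
}

s3.put_bucket_policy(
    Bucket="example-trusted-bucket",
    Policy=json.dumps(policy),
)
```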

Question 1105

Exam Question

A company built an application that lets users check in to places they visit, rank the places, and add reviews about their experiences. The application is successful, with a rapid increase in the number of users every month. The chief technology officer fears the database supporting the current infrastructure may not handle the new load the following month because the single Amazon RDS for MySQL instance has triggered alarms related to resource exhaustion due to read requests.

What can a solutions architect recommend to prevent service interruptions at the database layer with minimal changes to code?

A. Create RDS read replicas and redirect read-only traffic to the read replica endpoints. Enable a Multi-AZ deployment.
B. Create an Amazon EMR cluster and migrate the data to a Hadoop Distributed File System (HDFS) with a replication factor of 3.
C. Create an Amazon ElastiCache cluster and redirect all read-only traffic to the cluster. Set up the cluster to be deployed in three Availability Zones.
D. Create an Amazon DynamoDB table to replace the RDS instance and redirect all read-only traffic to the DynamoDB table. Enable DynamoDB Accelerator (DAX) to offload traffic from the main table.

Correct Answer

A. Create RDS read replicas and redirect read-only traffic to the read replica endpoints. Enable a Multi-AZ deployment.

Explanation

To prevent service interruptions at the database layer and handle the increasing read requests, a solutions architect can recommend creating RDS read replicas and redirecting read-only traffic to the read replica endpoints. This solution leverages the scalability and high availability features of Amazon RDS.

By creating read replicas, the read workload can be distributed across multiple instances, alleviating the resource exhaustion on the single Amazon RDS for MySQL instance. The read replicas can handle the read requests, while the primary instance focuses on write operations.

Enabling Multi-AZ deployment ensures that the primary instance and read replicas are replicated in different Availability Zones (AZs), providing increased availability and fault tolerance. In case of a failure in one AZ, the read replicas in other AZs can continue serving read traffic.

This solution requires minimal changes to the application code as read traffic can be redirected to the read replica endpoints without modifying the existing codebase.

Option B suggests migrating the data to a Hadoop Distributed File System (HDFS) using Amazon EMR. This would involve significant changes to the data storage layer and the codebase, which may not be the most efficient solution for handling read traffic.

Option C suggests using Amazon ElastiCache to handle read traffic. However, ElastiCache is a caching service and may not be suitable for storing the primary data and handling the write operations. It would require changes to the data storage layer and codebase.

Option D suggests replacing the RDS instance with Amazon DynamoDB. While DynamoDB can handle high read and write workloads, migrating from a relational database like Amazon RDS for MySQL to a NoSQL database like DynamoDB would require significant changes to the data model and codebase.

Therefore, the most suitable recommendation in this scenario is to create RDS read replicas and redirect read-only traffic to the read replica endpoints, while enabling a Multi-AZ deployment (Option A). This solution can handle the increased read workload, improve scalability, and provide high availability with minimal changes to the existing code.
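
A minimal boto3 sketch of Option A, with hypothetical instance identifiers: create a read replica of the existing MySQL instance and enable Multi-AZ on the primary.

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the existing MySQL instance to absorb read traffic.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-read-replica-1",
    SourceDBInstanceIdentifier="app-db-primary",
)

# Enable Multi-AZ on the primary for automatic failover of the writer.
rds.modify_db_instance(
    DBInstanceIdentifier="app-db-primary",
    MultiAZ=True,
    ApplyImmediately=True,
)
```

The only application-side change is pointing read-only connections at the replica endpoint, which keeps code changes minimal as the explanation describes.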

Question 1106

Exam Question

A company decides to migrate its three-tier web application from on-premises to the AWS Cloud. The new database must be capable of dynamically scaling storage capacity and performing table joins.

Which AWS service meets these requirements?

A. Amazon Aurora
B. Amazon RDS for SQL Server
C. Amazon DynamoDB Streams
D. Amazon DynamoDB on-demand

Correct Answer

A. Amazon Aurora

Explanation

To meet the requirements of dynamically scaling storage capacity and performing table joins, the most suitable AWS service is Amazon Aurora.

Amazon Aurora is a fully managed relational database service that is compatible with MySQL and PostgreSQL. It offers the benefits of high performance, scalability, and durability. Aurora allows you to dynamically scale storage capacity based on your application’s needs, eliminating the need for manual capacity planning.

In terms of table joins, Aurora supports complex relational database capabilities, including joining multiple tables, which makes it suitable for applications that require performing table joins.

Option B, Amazon RDS for SQL Server, is a managed relational database service specifically designed for Microsoft SQL Server. While it offers scalability and performance features, it does not provide the same level of dynamic storage scaling as Amazon Aurora.

Option C, Amazon DynamoDB Streams, is a feature of Amazon DynamoDB that provides a time-ordered sequence of item-level modifications. It is not directly related to dynamically scaling storage capacity or performing table joins.

Option D, Amazon DynamoDB on-demand, is a flexible pricing option for Amazon DynamoDB that charges you based on the actual reads and writes consumed by your application. It does not specifically address the requirement for dynamically scaling storage capacity or performing table joins.

Therefore, the most appropriate AWS service for this scenario is Amazon Aurora (Option A). It offers dynamic storage scaling and supports complex table joins, making it a suitable choice for migrating the three-tier web application to the AWS Cloud.
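
As a rough illustration of provisioning Aurora (Option A) with boto3, assuming hypothetical identifiers and that the master password is managed in AWS Secrets Manager: note that no allocated storage is specified, since Aurora storage grows automatically.

```python
import boto3

rds = boto3.client("rds")

# Create the Aurora MySQL-compatible cluster; storage scales automatically.
rds.create_db_cluster(
    DBClusterIdentifier="webapp-aurora-cluster",
    Engine="aurora-mysql",
    MasterUsername="admin",
    ManageMasterUserPassword=True,  # password stored and rotated in Secrets Manager
)

# The cluster needs at least one DB instance to serve queries (and joins).
rds.create_db_instance(
    DBInstanceIdentifier="webapp-aurora-instance-1",
    DBClusterIdentifier="webapp-aurora-cluster",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",
)
```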

Question 1107

Exam Question

An application runs on Amazon EC2 instances in private subnets. The application needs to access an Amazon DynamoDB table.

What is the MOST secure way to access the table while ensuring that the traffic does not leave the AWS network?

A. Use a VPC endpoint for DynamoDB.
B. Use a NAT gateway in a public subnet.
C. Use a NAT instance in a private subnet.
D. Use the internet gateway attached to the VPC.

Correct Answer

A. Use a VPC endpoint for DynamoDB.

Explanation

The most secure way to access an Amazon DynamoDB table from EC2 instances in private subnets while ensuring that the traffic does not leave the AWS network is to use a VPC endpoint for DynamoDB.

A VPC endpoint enables you to privately connect your VPC to supported AWS services without requiring an internet gateway, NAT gateway, or NAT instance. It allows traffic to flow between your VPC and the DynamoDB service over the Amazon network backbone, without going over the public internet.

By creating a VPC endpoint for DynamoDB and configuring your EC2 instances to access DynamoDB through this endpoint, you can securely access the DynamoDB table without the need for internet gateways or NAT gateways/instances. This ensures that the traffic remains within the AWS network and does not traverse the public internet, providing enhanced security.

Option B, using a NAT gateway in a public subnet, would route the traffic from the private subnet to the DynamoDB service through the public internet, which is not desirable in this case where the goal is to keep the traffic within the AWS network.

Option C, using a NAT instance in a private subnet, has a similar drawback as option B. It would involve routing the traffic through the NAT instance, which requires the instance to be deployed in a public subnet and introduces complexity and potential security risks.

Option D, using the internet gateway attached to the VPC, would route the traffic through the public internet, which is not ideal for keeping the traffic within the AWS network.

Therefore, the most secure and appropriate option is to use a VPC endpoint for DynamoDB (Option A) to ensure that the traffic stays within the AWS network while accessing the DynamoDB table from EC2 instances in private subnets.
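
A minimal boto3 sketch of Option A, assuming hypothetical VPC and route table IDs for the private subnets: creating the gateway endpoint adds routes so DynamoDB traffic stays on the AWS network.

```python
import boto3

ec2 = boto3.client("ec2")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-example",
    ServiceName="com.amazonaws.us-east-1.dynamodb",
    # Route tables used by the private subnets that host the EC2 instances.
    RouteTableIds=["rtb-private-a", "rtb-private-b"],
)
```

An endpoint policy can additionally restrict which DynamoDB actions and tables are reachable through the endpoint, further tightening access.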

Question 1108

Exam Question

A company requires a durable backup storage solution for its on-premises database servers while ensuring on-premises applications maintain access to these backups for quick recovery. The company will use AWS storage services as the destination for these backups. A solutions architect is designing a solution with minimal operational overhead.

Which solution should the solutions architect implement?

A. Deploy an AWS Storage Gateway file gateway on-premises and associate it with an Amazon S3 bucket.
B. Back up the databases to an AWS Storage Gateway volume gateway and access it using the Amazon S3 API.
C. Transfer the database backup files to an Amazon Elastic Block Store (Amazon EBS) volume attached to an Amazon EC2 instance.
D. Back up the database directly to an AWS Snowball device and use lifecycle rules to move the data to Amazon S3 Glacier Deep Archive.

Correct Answer

A. Deploy an AWS Storage Gateway file gateway on-premises and associate it with an Amazon S3 bucket.

Explanation

To meet the requirements of providing a durable backup storage solution for on-premises database servers while ensuring quick access for recovery, a suitable solution is to deploy an AWS Storage Gateway file gateway on-premises and associate it with an Amazon S3 bucket.

AWS Storage Gateway is a hybrid cloud storage service that enables on-premises applications to seamlessly integrate with AWS storage services. The file gateway configuration provides file-level access to objects stored in Amazon S3. By deploying the file gateway on-premises and associating it with an S3 bucket, the company can back up the database files to the on-premises file gateway, which then stores the data in Amazon S3.

This solution offers durability for the backup storage, as Amazon S3 provides a highly durable and scalable storage service. The backup files are securely stored in Amazon S3, ensuring their integrity and long-term retention.

Additionally, the file gateway allows on-premises applications to maintain access to these backups for quick recovery. It presents the backup files stored in Amazon S3 as network file shares, which can be accessed by on-premises applications without any major changes. This ensures a seamless and quick recovery process for the on-premises applications.

Option B, using an AWS Storage Gateway volume gateway, would provide block-level access to data stored in Amazon S3, which may not be suitable for on-premises applications requiring file-level access.

Option C, transferring the database backup files to an Amazon EBS volume attached to an EC2 instance, introduces additional complexity and operational overhead. It may not provide the same level of durability and seamless integration with AWS storage services as the Storage Gateway file gateway.

Option D, backing up the database directly to an AWS Snowball device and using lifecycle rules to move the data to Amazon S3 Glacier Deep Archive, is not the most suitable solution in this case, as it involves physical data transfer and may not provide the same level of operational efficiency as using an on-premises file gateway.

Therefore, the recommended solution is to deploy an AWS Storage Gateway file gateway on-premises and associate it with an Amazon S3 bucket (Option A). This solution provides a durable backup storage solution while ensuring on-premises applications maintain quick access to the backups with minimal operational overhead.
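
A rough boto3 sketch of the file share creation in Option A, assuming the file gateway appliance is already deployed and activated on premises and that the IAM role and bucket names are hypothetical: the NFS share exposes the S3 bucket to on-premises backup servers.

```python
import uuid
import boto3

sgw = boto3.client("storagegateway")

sgw.create_nfs_file_share(
    ClientToken=str(uuid.uuid4()),  # idempotency token for the request
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE",
    Role="arn:aws:iam::123456789012:role/StorageGatewayS3Access",
    LocationARN="arn:aws:s3:::example-db-backup-bucket",
    DefaultStorageClass="S3_STANDARD",
    ClientList=["10.0.0.0/16"],  # on-premises network allowed to mount the share
)
```

The database servers then write backup files to the mounted share, and the gateway uploads them to the S3 bucket while keeping recently used data cached locally for quick recovery.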

Question 1109

Exam Question

A company’s operations team has an existing Amazon S3 bucket configured to notify an Amazon SQS queue when a new object is created within the bucket. The development team also wants to receive events when new objects are created. The existing operations team workflow must remain intact.

Which solution would satisfy these requirements?

A. Create another SQS queue. Update the S3 events in the bucket to also update the new queue when a new object is created.
B. Create a new SQS queue that only allows Amazon S3 to access the queue. Update Amazon S3 to update this queue when a new object is created.
C. Create an Amazon SNS topic and an SQS queue for the bucket updates. Update the bucket to send events to the new topic. Update both queues to poll Amazon SNS.
D. Create an Amazon SNS topic and an SQS queue for the bucket updates. Update the bucket to send events to the new topic. Add subscriptions for both queues in the topic.

Correct Answer

D. Create an Amazon SNS topic and an SQS queue for the bucket updates. Update the bucket to send events to the new topic. Add subscriptions for both queues in the topic.

Explanation

To satisfy the requirements of allowing both the operations team and the development team to receive events when new objects are created in the Amazon S3 bucket, a suitable solution is to create an Amazon SNS topic and SQS queues.

Here’s how the solution can be implemented:

  1. Create an Amazon SNS topic: Create a new Amazon SNS topic that will serve as the central hub for publishing events.
  2. Create SQS queues: Create two SQS queues – one for the operations team and another for the development team. These queues will receive the events published by the SNS topic.
  3. Configure bucket event notification: Update the Amazon S3 bucket configuration to send events (e.g., object creation events) to the newly created SNS topic. This ensures that new object creation events in the bucket are published to the SNS topic.
  4. Subscribe SQS queues to the SNS topic: Add subscriptions for both SQS queues in the SNS topic. This allows the queues to receive the events published by the SNS topic. Both the operations team and the development team will receive the events in their respective queues.

With this solution, when a new object is created in the S3 bucket, the bucket sends an event to the SNS topic. The SNS topic, in turn, publishes the event to both SQS queues. The operations team can continue their existing workflow by polling their SQS queue, and the development team can receive the events in their own SQS queue.

Option A suggests creating another SQS queue and updating the S3 events in the bucket to send events to the new queue. This approach would work for the development team but would not maintain the existing operations team workflow.

Option B suggests creating a new SQS queue that only allows Amazon S3 to access it. While this solution would work for the development team, it would not fulfill the requirement of maintaining the existing operations team workflow.

Option C suggests creating an SNS topic and an SQS queue and having both queues poll Amazon SNS. SQS queues do not poll SNS; SNS pushes messages to subscribed queues. This option also does not address the requirement of maintaining the existing operations team workflow and keeping it separate from the development team’s workflow.

Therefore, the recommended solution is to create an Amazon SNS topic and SQS queue for the bucket updates, update the bucket to send events to the new topic, and add subscriptions for both queues in the topic (Option D). This solution satisfies the requirements by allowing both teams to receive events while maintaining the existing operations team workflow.
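
A minimal boto3 sketch of Option D, with hypothetical names, and assuming the topic and queue access policies already permit S3 to publish and SNS to deliver messages: the bucket publishes object-created events to one SNS topic, which fans out to both teams' SQS queues.

```python
import boto3

sns = boto3.client("sns")
s3 = boto3.client("s3")

# Central topic that fans out bucket events.
topic_arn = sns.create_topic(Name="bucket-object-created")["TopicArn"]

# Subscribe both queues: the operations team's existing queue and the
# development team's new queue (hypothetical ARNs).
for queue_arn in (
    "arn:aws:sqs:us-east-1:123456789012:ops-team-queue",
    "arn:aws:sqs:us-east-1:123456789012:dev-team-queue",
):
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Point the bucket's object-created notifications at the topic.
s3.put_bucket_notification_configuration(
    Bucket="example-bucket",
    NotificationConfiguration={
        "TopicConfigurations": [
            {"TopicArn": topic_arn, "Events": ["s3:ObjectCreated:*"]}
        ]
    },
)
```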

Question 1110

Exam Question

A company is using a tape backup solution to store its key application data offsite. The daily data volume is around 50 TB. The company needs to retain the backups for 7 years for regulatory purposes. The backups are rarely accessed and a week’s notice is typically given if a backup needs to be restored. The company is now considering a cloud-based option to reduce the storage costs and operational burden of managing tapes. The company also wants to make sure that the transition from tape backups to the cloud minimizes disruptions.

Which storage solution is MOST cost-effective?

A. Use AWS Storage Gateway to back up to Amazon S3 Glacier Deep Archive.
B. Use AWS Snowball Edge to directly integrate the backups with Amazon S3 Glacier.
C. Copy the backup data to Amazon S3 and create a lifecycle policy to move the data to Amazon S3 Glacier.
D. Use AWS Storage Gateway to back up to Amazon S3 and create a lifecycle policy to move the backup to Amazon S3 Glacier.

Correct Answer

C. Copy the backup data to Amazon S3 and create a lifecycle policy to move the data to Amazon S3 Glacier.

Explanation

To achieve a cost-effective cloud-based storage solution for the company’s backup data, the recommended approach is to copy the backup data to Amazon S3 and create a lifecycle policy to transition the data to Amazon S3 Glacier.

Here’s how this solution meets the requirements:

  1. Copy data to Amazon S3: Initially, the backup data can be copied directly to Amazon S3, which provides durable and scalable object storage.
  2. Lifecycle policy: Create a lifecycle policy for the Amazon S3 bucket to automatically transition the backup data to Amazon S3 Glacier after a certain period of time. In this case, since the company needs to retain the backups for 7 years, the lifecycle policy can be configured accordingly.

By implementing this solution, the company can benefit from the following:

  • Cost-effective storage: Amazon S3 provides cost-effective storage for the backup data compared to maintaining tape backups. The costs are optimized based on the storage class used, such as Amazon S3 Glacier, which offers low-cost archival storage.
  • Reduced operational burden: With the transition to the cloud, the company can eliminate the operational burden of managing physical tapes, including transportation, storage, and maintenance.
  • Durability and availability: Amazon S3 ensures the durability and availability of the backup data, with built-in redundancy and replication across multiple facilities.
  • Seamless transition: By copying the backup data to Amazon S3 and creating a lifecycle policy, the transition from tape backups to the cloud can be performed with minimal disruption. The data can still be accessed when needed, but the storage costs are optimized by moving it to Amazon S3 Glacier.

Option A suggests using AWS Storage Gateway to back up to Amazon S3 Glacier Deep Archive. While this would provide a cost-effective archival storage solution, it may not be the most suitable option for directly transitioning from tape backups.

Option B suggests using AWS Snowball Edge to integrate the backups with Amazon S3 Glacier. While this provides a direct integration option, it may introduce additional complexities and operational overhead compared to the recommended solution.

Option D suggests using AWS Storage Gateway to back up to Amazon S3 and creating a lifecycle policy to move the backups to Amazon S3 Glacier. This option is close to the recommended solution but involves an additional component (Storage Gateway) that may not be necessary for achieving the desired outcome.

Therefore, the most cost-effective solution in this scenario is to copy the backup data to Amazon S3 and create a lifecycle policy to transition the data to Amazon S3 Glacier (Option C).
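
A minimal boto3 sketch of the lifecycle configuration from Option C, using a hypothetical bucket name: backups copied into S3 are transitioned to the Glacier storage class and expired after the 7-year retention period.

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-tape-replacement-backups",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                # Move backups to Glacier shortly after upload to cut storage cost.
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                # Delete after the regulatory retention period (roughly 7 years).
                "Expiration": {"Days": 2555},
            }
        ]
    },
)
```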
