
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 33

Question 1041

Exam Question

A recent analysis of a company’s IT expenses highlights the need to reduce backup costs. The company’s chief information officer wants to simplify the on-premises backup infrastructure and reduce costs by eliminating the use of physical backup tapes. The company must preserve the existing investment in the on-premises backup applications and workflows.

What should a solutions architect recommend?

A. Set up AWS Storage Gateway to connect with the backup applications using the NFS interface.
B. Set up an Amazon EFS file system that connects with the backup applications using the NFS interface.
C. Set up an Amazon EFS file system that connects with the backup applications using the iSCSI interface.
D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual tape library (VTL) interface.

Correct Answer

D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual tape library (VTL) interface.

Explanation

A solutions architect should recommend:

D. Set up AWS Storage Gateway to connect with the backup applications using the iSCSI-virtual tape library (VTL) interface.

By setting up AWS Storage Gateway with the iSCSI-virtual tape library (VTL) interface, the company can eliminate the use of physical backup tapes and preserve the existing investment in on-premises backup applications and workflows.

AWS Storage Gateway provides a hybrid cloud storage solution that enables on-premises applications to seamlessly and securely access cloud storage. The VTL interface allows existing backup applications that use tape-based workflows to work with virtual tapes stored in Amazon S3 or Amazon S3 Glacier.

By leveraging AWS Storage Gateway’s VTL interface, the company can replace physical backup tapes with virtual tapes stored in Amazon S3 or Amazon S3 Glacier, reducing backup costs and simplifying the on-premises backup infrastructure.
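As a rough illustration, the boto3 (Python) sketch below provisions virtual tapes on a Tape Gateway that has already been activated on premises. The gateway ARN, tape size, and barcode prefix are placeholder assumptions, not values from the scenario.

    import boto3

    storagegateway = boto3.client("storagegateway", region_name="us-east-1")

    # Placeholder ARN for a Tape Gateway that is already activated.
    gateway_arn = "arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-EXAMPLE"

    # Create virtual tapes that the existing backup application can
    # discover over the iSCSI-VTL interface, just like physical tapes.
    response = storagegateway.create_tapes(
        GatewayARN=gateway_arn,
        TapeSizeInBytes=100 * 1024**3,      # 100 GiB per virtual tape
        ClientToken="create-tapes-demo-1",  # idempotency token
        NumTapesToCreate=5,
        TapeBarcodePrefix="BKP",
    )
    print(response["TapeARNs"])

The backup application then writes to these virtual tapes exactly as it would to a physical tape library, while the data lands in Amazon S3.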

Option A (Setting up AWS Storage Gateway with the NFS interface) and Option B (Setting up an Amazon EFS file system with the NFS interface) do not preserve the existing investment in the on-premises backup applications and workflows.

Option C (Setting up an Amazon EFS file system with the iSCSI interface) does not provide the necessary integration with the existing backup applications, as it uses a different interface.

Therefore, the most appropriate solution in this scenario is to set up AWS Storage Gateway with the iSCSI-virtual tape library (VTL) interface.

Question 1042

Exam Question

A company is developing a new machine learning model solution on AWS. The models are developed as independent microservices that fetch about 1 GB of model data from Amazon S3 at startup and load the data into memory. Users access the models through an asynchronous API. Users can send a request or a batch of requests and specify where the results should be sent. The company provides models to hundreds of users. The usage patterns for the models are irregular. Some models could be unused for days or weeks. Other models could receive batches of thousands of requests at a time.

Which solution meets these requirements?

A. The requests from the API are sent to an Application Load Balancer (ALB). Models are deployed as AWS Lambda functions invoked by the ALB.
B. The requests from the API are sent to the models’ Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as AWS Lambda functions triggered by SQS events. AWS Auto Scaling is enabled on Lambda to increase the number of vCPUs based on the SQS queue size.
C. The requests from the API are sent to the models’ Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as Amazon Elastic Container Service (Amazon ECS) services reading from the queue. AWS App Mesh scales the instances of the ECS cluster based on the SQS queue size.
D. The requests from the API are sent to the models’ Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as Amazon Elastic Container Service (Amazon ECS) services reading from the queue. AWS Auto Scaling is enabled on Amazon ECS for both the cluster and copies of the service based on the queue size.

Correct Answer

B. The requests from the API are sent to the models’ Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as AWS Lambda functions triggered by SQS events. AWS Auto Scaling is enabled on Lambda to increase the number of vCPUs based on the SQS queue size.

Explanation

The most appropriate solution for the given requirements would be:

B. The requests from the API are sent to the models’ Amazon Simple Queue Service (Amazon SQS) queue. Models are deployed as AWS Lambda functions triggered by SQS events. AWS Auto Scaling is enabled on Lambda to increase the number of vCPUs based on the SQS queue size.

In this solution, the requests from the API are sent to an SQS queue, which decouples the API from the models and provides asynchronous processing. The models are implemented as AWS Lambda functions that are triggered by SQS events. When the number of messages in the SQS queue increases, Lambda automatically scales the number of concurrent executions to handle the workload.

This solution is well-suited for irregular usage patterns and varying request loads. It allows the system to scale automatically based on the number of messages in the SQS queue, ensuring efficient utilization of resources.
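As a sketch of how one of these model microservices might look, the Lambda handler below (Python, boto3) loads the model data from Amazon S3 at module scope, so the roughly 1 GB download happens once per execution environment and is reused across warm invocations. The bucket, key, and helper functions are hypothetical placeholders.

    import json
    import boto3

    s3 = boto3.client("s3")

    # Loaded at module scope: fetched from S3 once per execution
    # environment (cold start), then reused by warm invocations.
    model_bytes = s3.get_object(
        Bucket="example-model-bucket", Key="models/model-a.bin"
    )["Body"].read()

    def run_inference(model, payload):
        # Stub for the actual model invocation.
        return {"echo": payload}

    def deliver(result, destination):
        # Stub: write the result to the user-specified destination,
        # for example an S3 prefix or another SQS queue.
        print(destination, result)

    def handler(event, context):
        # Lambda delivers SQS messages in batches; each record body is
        # assumed to be a JSON request that names a callback target.
        for record in event["Records"]:
            request = json.loads(record["body"])
            result = run_inference(model_bytes, request["input"])
            deliver(result, request["callback"])

The function is attached to the queue with an SQS event source mapping (for example, the Lambda create_event_source_mapping API), and Lambda polls the queue and invokes the function with batches of messages.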

Option A, deploying models as AWS Lambda functions invoked by an Application Load Balancer (ALB), may not be suitable for irregular usage patterns and does not provide the decoupling and asynchronous processing capabilities offered by SQS.

Option C, deploying models as Amazon Elastic Container Service (Amazon ECS) services reading from the SQS queue, introduces additional complexity compared to using Lambda functions. Lambda functions are serverless and automatically managed, requiring less operational overhead.

Option D, deploying models as Amazon ECS services reading from the SQS queue, along with AWS Auto Scaling enabled for both the cluster and the service, can work but introduces more management and operational complexity compared to using Lambda functions.

Therefore, the recommended solution is to use AWS Lambda functions triggered by SQS events with AWS Auto Scaling enabled on Lambda based on the SQS queue size.

Question 1043

Exam Question

A company currently stores symmetric encryption keys in a hardware security module (HSM). A solutions architect must design a solution to migrate key management to AWS. The solution should allow for key rotation and support the use of customer-provided keys.

Where should the key material be stored to meet these requirements?

A. Amazon S3
B. AWS Secrets Manager
C. AWS Systems Manager Parameter store
D. AWS Key Management Service (AWS KMS)

Correct Answer

D. AWS Key Management Service (AWS KMS)

Explanation

To meet the requirements of key rotation and support for customer-provided keys, the key material should be stored in:

D. AWS Key Management Service (AWS KMS).

AWS Key Management Service (AWS KMS) is a managed service that enables you to create and control the encryption keys used to encrypt your data. AWS KMS provides a secure and scalable platform for key management, including key generation, rotation, and storage. It supports customer managed keys (CMKs), including keys created from imported customer-provided key material, and allows for key rotation to enhance security. With AWS KMS, you can centrally manage and control access to your encryption keys while benefiting from the scalability and durability of the AWS infrastructure.
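As a minimal sketch of these capabilities, the boto3 (Python) snippet below creates a KMS-generated symmetric key with automatic rotation enabled, then creates a second key with Origin=EXTERNAL to accept imported (customer-provided) key material. Descriptions and algorithm choices are illustrative assumptions; note that keys with imported material cannot use automatic rotation and are rotated manually by importing new material.

    import boto3

    kms = boto3.client("kms")

    # A KMS-generated symmetric key with automatic annual rotation.
    key = kms.create_key(Description="Example symmetric key",
                         KeySpec="SYMMETRIC_DEFAULT")
    kms.enable_key_rotation(KeyId=key["KeyMetadata"]["KeyId"])

    # For customer-provided material, create a key with no material
    # (Origin=EXTERNAL) and import the material out of band.
    byok = kms.create_key(
        Description="Key for imported HSM material",
        KeySpec="SYMMETRIC_DEFAULT",
        Origin="EXTERNAL",
    )
    params = kms.get_parameters_for_import(
        KeyId=byok["KeyMetadata"]["KeyId"],
        WrappingAlgorithm="RSAES_OAEP_SHA_256",
        WrappingKeySpec="RSA_2048",
    )
    # params["PublicKey"] wraps the exported HSM key material, which
    # is then uploaded with import_key_material().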

Options A, B, and C (Amazon S3, AWS Secrets Manager, and AWS Systems Manager Parameter Store) are not designed for encryption key management. While these services have their own use cases, they are not suitable for meeting the specific requirements of key management, rotation, and support for customer-provided keys.

Question 1044

Exam Question

A company stores call recordings on a monthly basis. Statistically, the recorded data may be referenced randomly within a year but accessed rarely after 1 year. Files that are newer than 1 year old must be queried and retrieved as quickly as possible. A delay in retrieving older files is acceptable. A solutions architect needs to store the recorded data at a minimal cost.

Which solution is MOST cost-effective?

A. Store individual files in Amazon S3 Glacier and store search metadata in object tags created in S3 Glacier. Query S3 Glacier tags and retrieve the files from S3 Glacier.
B. Store individual files in Amazon S3. Use lifecycle policies to move the files to Amazon S3 Glacier after 1 year. Query and retrieve the files from Amazon S3 or S3 Glacier.
C. Archive individual files and store search metadata for each archive in Amazon S3. Use lifecycle policies to move the files to Amazon S3 Glacier after 1 year. Query and retrieve the files by searching for metadata from Amazon S3.
D. Archive individual files in Amazon S3. Use lifecycle policies to move the files to Amazon S3 Glacier after 1 year. Store search metadata in Amazon DynamoDB. Query the files from DynamoDB and retrieve them from Amazon S3 or S3 Glacier.

Correct Answer

B. Store individual files in Amazon S3. Use lifecycle policies to move the files to Amazon S3 Glacier after 1 year. Query and retrieve the files from Amazon S3 or S3 Glacier.

Explanation

The MOST cost-effective solution for storing call recordings based on the given requirements would be:

B. Store individual files in Amazon S3. Use lifecycle policies to move the files to Amazon S3 Glacier after 1 year. Query and retrieve the files from Amazon S3 or S3 Glacier.

By storing the files in Amazon S3 initially and using lifecycle policies to transition them to Amazon S3 Glacier after 1 year, you can take advantage of the cost savings provided by Amazon S3 Glacier’s lower storage costs for long-term storage. This approach allows you to access and retrieve newer files quickly from Amazon S3, which is optimized for frequent and immediate access. Older files can be retrieved from Amazon S3 Glacier, where the retrieval time may be longer but acceptable based on the given requirements.
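A minimal boto3 (Python) sketch of such a lifecycle rule follows; the bucket name and rule ID are placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Transition all objects to S3 Glacier 365 days after creation.
    s3.put_bucket_lifecycle_configuration(
        Bucket="example-call-recordings",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-after-1-year",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},  # apply to every object
                    "Transitions": [
                        {"Days": 365, "StorageClass": "GLACIER"}
                    ],
                }
            ]
        },
    )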

Options A, C, and D involve additional services or architectures that may add complexity and cost without providing significant benefits in this scenario. Storing files directly in Amazon S3 and leveraging its lifecycle policies to transition them to Amazon S3 Glacier provides a simple and cost-effective solution while meeting the access requirements.

Question 1045

Exam Question

An application running on an Amazon EC2 instance in VPC-A needs to access files in another EC2 instance in VPC-B. Both are in separate AWS accounts. The network administrator needs to design a solution to enable secure access to EC2 instances in VPC-B from VPC-A. The connectivity should not have a single point of failure or bandwidth concerns.

Which solution will meet these requirements?

A. Set up a VPC peering connection between VPC-A and VPC-B.
B. Set up VPC gateway endpoints for the EC2 instance running in VPC-B.
C. Attach a virtual private gateway to VPC-B and enable routing from VPC-A.
D. Create a private virtual interface (VIF) for the EC2 instance running in VPC-B and add appropriate routes from VPC-B.

Correct Answer

A. Set up a VPC peering connection between VPC-A and VPC-B.

Explanation

To meet the requirements of enabling secure access to an EC2 instance in VPC-B from VPC-A, without a single point of failure or bandwidth concerns, the recommended solution is:

A. Set up a VPC peering connection between VPC-A and VPC-B.

VPC peering allows secure communication between instances in different VPCs using private IP addresses. It establishes a direct network connection between the VPCs, enabling seamless communication without the need for internet access or a VPN connection. By setting up a VPC peering connection, you can establish secure connectivity between the EC2 instance in VPC-A and the EC2 instance in VPC-B.
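For illustration, here is a hedged boto3 (Python) sketch of the peering workflow; all VPC, account, route table, and CIDR values are placeholders.

    import boto3

    # Account A requests peering from VPC-A to VPC-B in account B.
    ec2_a = boto3.client("ec2", region_name="us-east-1")
    pcx_id = ec2_a.create_vpc_peering_connection(
        VpcId="vpc-aaaa1111",
        PeerVpcId="vpc-bbbb2222",
        PeerOwnerId="222222222222",
    )["VpcPeeringConnection"]["VpcPeeringConnectionId"]

    # The owner of VPC-B accepts the request using account B credentials.
    ec2_b = boto3.client("ec2", region_name="us-east-1")
    ec2_b.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

    # Each side then adds a route to the other VPC's CIDR block.
    ec2_a.create_route(
        RouteTableId="rtb-aaaa1111",
        DestinationCidrBlock="10.1.0.0/16",  # assumed VPC-B CIDR
        VpcPeeringConnectionId=pcx_id,
    )

Security groups and network ACLs must also allow the traffic between the instances; those rules are omitted here.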

Option B, setting up VPC gateway endpoints, is used for accessing AWS services (such as S3 or DynamoDB) privately from a VPC without needing to traverse the public internet. It is not directly applicable to enabling communication between EC2 instances in separate VPCs.

Option C, attaching a virtual private gateway to VPC-B and enabling routing from VPC-A, is used for establishing a VPN connection between VPCs. While it can provide connectivity, it introduces a single point of failure and potential bandwidth limitations.

Option D, creating a private virtual interface (VIF) for the EC2 instance in VPC-B, is used for connecting to an on-premises network or a remote network via AWS Direct Connect. It is not necessary for communication between EC2 instances within different VPCs.

Therefore, the most suitable solution is to set up a VPC peering connection (option A) to securely connect the EC2 instances in VPC-A and VPC-B.

Question 1046

Exam Question

A company is building a payment application that must be highly available even during regional service disruptions. A solutions architect must design a data storage solution that can be easily replicated and used in other AWS Regions. The application also requires low-latency atomicity, consistency, isolation, and durability (ACID) transactions that need to be immediately available to generate reports. The development team also needs to use SQL.

Which data storage solution meets these requirements?

A. Amazon Aurora Global Database
B. Amazon DynamoDB global tables
C. Amazon S3 with cross-Region replication and Amazon Athena
D. MySQL on Amazon EC2 instances with Amazon Elastic Block Store (Amazon EBS) snapshot replication

Correct Answer

A. Amazon Aurora Global Database

Explanation

To meet the requirements of a highly available data storage solution that can be easily replicated across AWS Regions, supports low-latency ACID transactions, and allows the use of SQL, the recommended solution is:

A. Amazon Aurora Global Database

Amazon Aurora Global Database is a distributed relational database service that provides high availability and low latency replication across multiple AWS Regions. It is designed to replicate data with minimal replication lag, ensuring that data is immediately available across Regions. Amazon Aurora also supports ACID transactions and offers compatibility with MySQL and PostgreSQL, allowing the development team to use SQL for their application.
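As a rough sketch, the boto3 (Python) snippet below creates a global cluster, a primary Regional cluster, and a read-only secondary cluster in a second Region. Identifiers and credentials are placeholders, and DB instances would still be added to each cluster with create_db_instance.

    import boto3

    primary = boto3.client("rds", region_name="us-east-1")

    # Create the global cluster, then a Regional primary that joins it.
    primary.create_global_cluster(
        GlobalClusterIdentifier="payments-global",
        Engine="aurora-mysql",
    )
    primary.create_db_cluster(
        DBClusterIdentifier="payments-us-east-1",
        Engine="aurora-mysql",
        GlobalClusterIdentifier="payments-global",
        MasterUsername="admin",
        MasterUserPassword="REPLACE_ME",  # placeholder
    )

    # A read-only secondary cluster in another Region joins the same
    # global cluster; secondaries do not set master credentials.
    secondary = boto3.client("rds", region_name="eu-west-1")
    secondary.create_db_cluster(
        DBClusterIdentifier="payments-eu-west-1",
        Engine="aurora-mysql",
        GlobalClusterIdentifier="payments-global",
    )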

Option B, Amazon DynamoDB global tables, is a NoSQL database service that supports multi-master replication across multiple AWS Regions. While it provides high availability, it may not fulfill the requirement of using SQL for the application.

Option C, using Amazon S3 with cross-Region replication and Amazon Athena, is primarily for object storage and querying data using SQL-like queries. It may not provide the low-latency ACID transactions required by the payment application.

Option D, using MySQL on Amazon EC2 instances with Amazon EBS snapshot replication, requires manual replication and configuration across multiple Regions, making it more complex and less suitable for highly available and easily replicated data storage.

Therefore, the most appropriate data storage solution that meets the given requirements is Amazon Aurora Global Database (option A).

Question 1047

Exam Question

A company recently launched its website to serve content to its global user base. The company wants to store and accelerate the delivery of static content to its users by leveraging Amazon CloudFront with an Amazon EC2 instance attached as its origin.

How should a solutions architect optimize high availability for the application?

A. Use Lambda@Edge for CloudFront.
B. Use Amazon S3 Transfer Acceleration for CloudFront.
C. Configure another EC2 instance in a different Availability Zone as part of the origin group.
D. Configure another EC2 instance as part of the origin server cluster in the same Availability Zone.

Correct Answer

C. Configure another EC2 instance in a different Availability Zone as part of the origin group.

Explanation

To optimize high availability for the application while using Amazon CloudFront with an Amazon EC2 instance as the origin, the recommended solution is:

C. Configure another EC2 instance in a different Availability Zone as part of the origin group.

By configuring another EC2 instance in a different Availability Zone as part of the origin group, you ensure that if one EC2 instance or Availability Zone becomes unavailable, CloudFront can automatically route requests to the available instance in the alternate Availability Zone. This helps improve the high availability of the application by distributing the workload across multiple instances and ensuring continuous content delivery to users even in the event of a failure.
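For illustration, the fragment below shows the OriginGroups portion of a CloudFront DistributionConfig, expressed as a Python dict in the shape boto3 accepts. The origin IDs and failover status codes are illustrative assumptions; the two member origins would point at the EC2 instances in the two Availability Zones.

    # CloudFront fails over to the secondary origin when the primary
    # returns one of the listed status codes (or is unreachable).
    origin_groups = {
        "Quantity": 1,
        "Items": [
            {
                "Id": "ec2-origin-group",
                "FailoverCriteria": {
                    "StatusCodes": {"Quantity": 3, "Items": [500, 502, 504]}
                },
                "Members": {
                    "Quantity": 2,
                    "Items": [
                        {"OriginId": "ec2-az-a"},  # primary origin
                        {"OriginId": "ec2-az-b"},  # failover origin
                    ],
                },
            }
        ],
    }
    # The group is used by setting TargetOriginId to "ec2-origin-group"
    # in the distribution's cache behavior.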

Option A, using Lambda@Edge for CloudFront, is primarily used for executing serverless functions at edge locations, such as modifying responses, generating dynamic content, or routing requests. While it can enhance functionality, it is not directly related to optimizing high availability.

Option B, using Amazon S3 Transfer Acceleration for CloudFront, is a feature that enhances the transfer speed of data from Amazon S3 to users. While it can improve performance, it is not specifically designed to optimize high availability.

Option D, configuring another EC2 instance as part of the origin server cluster in the same Availability Zone, does not provide the same level of high availability as having instances in different Availability Zones. If the Availability Zone experiences an outage, the application may still be affected.

Therefore, the most suitable option for optimizing high availability in this scenario is to configure another EC2 instance in a different Availability Zone as part of the origin group (option C).

Question 1048

Exam Question

A recently created startup built a three-tier web application. The front end has static content. The application layer is based on microservices. User data is stored as JSON documents that need to be accessed with low latency. The company expects regular traffic to be low during the first year, with peaks in traffic when it publicizes new features every month. The startup team needs to minimize operational overhead costs.

What should a solutions architect recommend to accomplish this?

A. Use Amazon S3 static website hosting to store and serve the front end. Use AWS Elastic Beanstalk for the application layer. Use Amazon DynamoDB to store user data.
B. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon Elastic Kubernetes Service (Amazon EKS) for the application layer. Use Amazon DynamoDB to store user data.
C. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon API Gateway and AWS Lambda functions for the application layer. Use Amazon DynamoDB to store user data.
D. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon API Gateway and AWS Lambda functions for the application layer. Use Amazon RDS with read replicas to store user data.

Correct Answer

C. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon API Gateway and AWS Lambda functions for the application layer. Use Amazon DynamoDB to store user data.

Explanation

To accomplish the requirements of a three-tier web application with low latency access to user data, regular low traffic, and minimal operational overhead costs for a startup, the recommended solution is:

C. Use Amazon S3 static website hosting to store and serve the front end. Use Amazon API Gateway and AWS Lambda functions for the application layer. Use Amazon DynamoDB to store user data.

Here’s the rationale for this choice:

Amazon S3 static website hosting: Storing and serving the front end using Amazon S3 is a cost-effective and scalable solution for hosting static content. It allows you to easily distribute the static content globally with low latency.

Amazon API Gateway and AWS Lambda functions: Using Amazon API Gateway in combination with AWS Lambda functions provides a serverless architecture for the application layer. This approach offers scalability, low operational overhead, and cost efficiency. Lambda functions can handle the microservices logic, and API Gateway acts as the entry point for the application, allowing you to manage and control the APIs easily.

Amazon DynamoDB: DynamoDB is a NoSQL database service that offers low-latency access to JSON documents. It can efficiently store and retrieve user data with high performance and automatic scaling. DynamoDB is designed for fast and predictable performance, making it suitable for applications with low latency requirements.

This solution eliminates the need for managing and provisioning infrastructure directly, reducing operational overhead costs. It leverages serverless services, allowing automatic scaling to handle traffic peaks when new features are publicized. The combination of S3, API Gateway, Lambda, and DynamoDB offers a scalable, low-latency, and cost-effective architecture for the startup’s web application.
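As a minimal sketch of the application layer, the Lambda handler below (Python, boto3) assumes an API Gateway proxy integration and a hypothetical DynamoDB table named UserData with a userId partition key.

    import json
    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("UserData")  # hypothetical table name

    def handler(event, context):
        # API Gateway (proxy integration) passes the HTTP body as a
        # string; store it as a JSON document keyed by user ID.
        body = json.loads(event["body"])
        table.put_item(Item={"userId": body["userId"], "profile": body})
        return {"statusCode": 200, "body": json.dumps({"ok": True})}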

Question 1049

Exam Question

A company is investigating potential solutions that would collect, process, and store users’ service usage data. The business objective is to create an analytics capability that will enable the company to gather operational insights quickly using standard SQL queries. The solution should be highly available and ensure Atomicity, Consistency, Isolation, and Durability (ACID) compliance in the data tier.

Which solution should a solutions architect recommend?

A. Use Amazon DynamoDB transactions
B. Create an Amazon Neptune database in a Multi AZ design
C. Use a fully managed Amazon RDS for MySQL database in a Multi-AZ design
D. Deploy PostgreSQL on an Amazon EC2 instance that uses Amazon EBS Throughput Optimized HDD storage.

Correct Answer

C. Use a fully managed Amazon RDS for MySQL database in a Multi-AZ design

Explanation

To meet the requirements of collecting, processing, and storing users’ service usage data while enabling quick operational insights using standard SQL queries and ensuring ACID compliance, the recommended solution is:

C. Use a fully managed Amazon RDS for MySQL database in a Multi-AZ design.

Here’s the rationale for this choice:

  1. Amazon RDS for MySQL: Amazon RDS is a fully managed database service that simplifies the deployment and management of MySQL databases. It provides high availability, automated backups, automated software patching, and monitoring capabilities. RDS supports standard SQL queries, making it suitable for the company’s analytics capability.
  2. Multi-AZ design: By choosing a Multi-AZ deployment for Amazon RDS, you ensure high availability and durability. Multi-AZ automatically replicates data to a standby instance in a different Availability Zone, providing failover capabilities in case of infrastructure issues or maintenance events.
  3. ACID compliance: Amazon RDS for MySQL supports ACID properties, which ensure transactional consistency and durability. This is crucial for maintaining data integrity and reliability in the data tier.
  4. Standard SQL queries: Amazon RDS for MySQL is compatible with standard SQL queries, allowing the company to leverage their existing knowledge and tools for data analytics. SQL-based queries provide a familiar and powerful way to extract insights from the collected service usage data.

Using Amazon RDS for MySQL in a Multi-AZ design provides a highly available and durable data storage solution that meets the ACID compliance requirements. The managed nature of RDS reduces operational overhead, allowing the company to focus on analytics and operational insights rather than infrastructure management.
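A minimal boto3 (Python) sketch of provisioning such a database follows; the identifier, sizing, and credentials are placeholders.

    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance(
        DBInstanceIdentifier="usage-analytics",
        Engine="mysql",
        DBInstanceClass="db.m6g.large",
        AllocatedStorage=100,             # GiB
        MasterUsername="admin",
        MasterUserPassword="REPLACE_ME",  # placeholder
        MultiAZ=True,                     # synchronous standby in another AZ
        BackupRetentionPeriod=7,          # days of automated backups
    )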

Question 1050

Exam Question

A solutions architect is planning the deployment of a new static website. The solution must minimize costs and provide at least 99% availability.

Which solution meets these requirements?

A. Deploy the application to an Amazon S3 bucket in one AWS Region that has versioning disabled.
B. Deploy the application to Amazon EC2 instances that run in two AWS Regions and two Availability Zones.
C. Deploy the application to an Amazon S3 bucket that has versioning and cross-Region replication enabled.
D. Deploy the application to an Amazon EC2 instance that runs in one AWS Region and one Availability Zone.

Correct Answer

A. Deploy the application to an Amazon S3 bucket in one AWS Region that has versioning disabled.

Explanation

The solution that meets the requirements of minimizing costs and providing at least 99% availability is:

A. Deploy the application to an Amazon S3 bucket in one AWS Region that has versioning disabled.

Here’s the rationale for this choice:

  1. Amazon S3 bucket: Storing the static website in an S3 bucket is a cost-effective option as S3 provides durable and scalable object storage. It eliminates the need for managing and maintaining EC2 instances, reducing operational overhead.
  2. Versioning disabled: Disabling versioning in the S3 bucket helps minimize storage costs by storing only the latest version of each object. Since it’s a static website, there is no need for versioning as there won’t be frequent updates or changes to individual files.
  3. Single AWS Region: Deploying the website in one AWS Region simplifies the architecture and minimizes costs by avoiding cross-Region data replication. A single Region is sufficient here because the availability target is only 99%, which leaves room for occasional brief interruptions.
  4. 99% availability: Amazon S3 is a highly available and durable service. The S3 SLA (Service Level Agreement) specifies 99.9% availability for the S3 Standard storage class, which comfortably exceeds the 99% requirement, so a single-Region deployment meets the target.

By deploying the static website to an S3 bucket in a single AWS Region without versioning enabled, the solution minimizes costs and provides the required level of availability. It simplifies the deployment and management of the website, allowing the solutions architect to focus on the core functionality of the application rather than infrastructure management.
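For illustration, a hedged boto3 (Python) sketch of this deployment follows; the bucket name and file names are placeholders. Serving the site from the S3 website endpoint also requires a bucket policy that allows public reads (and relaxed Block Public Access settings), which is omitted here.

    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")
    bucket = "example-static-site"  # placeholder bucket name

    s3.create_bucket(Bucket=bucket)
    # Versioning is disabled by default on a new bucket, so no
    # additional call is needed to keep it off.
    s3.put_bucket_website(
        Bucket=bucket,
        WebsiteConfiguration={
            "IndexDocument": {"Suffix": "index.html"},
            "ErrorDocument": {"Key": "error.html"},
        },
    )
    s3.upload_file("index.html", bucket, "index.html",
                   ExtraArgs={"ContentType": "text/html"})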
