
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 28

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free of charge to help you prepare for and pass the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.

Question 991

Exam Question

Organizers for a global event want to put daily reports online as static HTML pages. The pages are expected to generate millions of views from users around the world. The files are stored in an Amazon S3 bucket. A solutions architect has been asked to design an efficient and effective solution.

Which action should the solutions architect take to accomplish this?

A. Generate pre-signed URLs for the files.
B. Use cross-Region replication to all Regions.
C. Use the geo-proximity feature of Amazon Route 53.
D. Use Amazon CloudFront with the S3 bucket as its origin.

Correct Answer

D. Use Amazon CloudFront with the S3 bucket as its origin.

Explanation

To efficiently and effectively serve static HTML pages stored in an Amazon S3 bucket to users worldwide, the solutions architect should take the following action:

D. Use Amazon CloudFront with the S3 bucket as its origin.

Amazon CloudFront is a global content delivery network (CDN) service that caches content at edge locations around the world, reducing latency and improving the performance of content delivery. By configuring CloudFront with the S3 bucket as its origin, the static HTML pages can be cached at edge locations closest to the users, resulting in faster access and reduced load on the S3 bucket.

This solution ensures that the static HTML pages can be served with low latency and high availability, even for users located in different regions around the world. CloudFront automatically routes the requests to the nearest edge location, minimizing the distance and network latency between the users and the content.

Option A, generating pre-signed URLs, grants temporary access to individual private objects. It adds URL-generation and expiry-management overhead and does nothing to improve global delivery performance, so it is not suitable for serving millions of public page views.

Option B, cross-Region replication, is not necessary for this scenario as the objective is to efficiently serve static HTML pages globally, not replicate the files across different regions.

Option C, using the geoproximity routing feature of Amazon Route 53, is not the most efficient solution for serving static files. While Route 53 can route users to the nearest endpoint based on their geographic location, it operates only at the DNS level and doesn't provide the caching and edge optimization capabilities of CloudFront.

Therefore, option D (Use Amazon CloudFront with the S3 bucket as its origin) is the recommended action to efficiently and effectively serve the static HTML pages globally.
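As a sketch of what option D looks like in practice, the following builds the kind of `DistributionConfig` dictionary that boto3's `cloudfront` client expects for `create_distribution`, with the S3 bucket as the only origin. The bucket name, Region, and origin access identity are hypothetical placeholders, and several fields required by the real API are trimmed for brevity.

```python
def reports_distribution_config(bucket: str, region: str, oai_id: str) -> dict:
    """Build a trimmed-down CloudFront DistributionConfig with one S3 origin.
    Real calls to create_distribution need additional required fields."""
    origin_id = "s3-daily-reports"
    return {
        "CallerReference": "daily-reports-v1",  # any unique string
        "Comment": "Static daily event reports",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": origin_id,
                # Regional S3 endpoint of the origin bucket
                "DomainName": f"{bucket}.s3.{region}.amazonaws.com",
                "S3OriginConfig": {
                    # Keep the bucket private; serve objects only via CloudFront
                    "OriginAccessIdentity": f"origin-access-identity/cloudfront/{oai_id}",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": origin_id,  # must match an origin Id above
            "ViewerProtocolPolicy": "redirect-to-https",
        },
    }

config = reports_distribution_config("event-reports", "us-east-1", "E2EXAMPLE")
```

With this shape, every edge location caches the HTML pages after the first request, so the S3 bucket itself sees only cache misses.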

Question 992

Exam Question

A company is migrating its applications to AWS. Currently, applications that run on premises generate hundreds of terabytes of data that is stored on a shared file system. The company is running an analytics application in the cloud that runs hourly to generate insights from this data. The company needs a solution to handle the ongoing data transfer between the on-premises shared file system and Amazon S3. The solution also must be able to handle occasional interruptions in internet connectivity.

Which solutions should the company use for the data transfer to meet these requirements?

A. AWS DataSync
B. AWS Migration Hub
C. AWS Snowball Edge Storage Optimized
D. AWS Transfer for SFTP

Correct Answer

A. AWS DataSync

Explanation

To handle the ongoing data transfer between the on-premises shared file system and Amazon S3, including occasional interruptions in internet connectivity, the company should use:

A. AWS DataSync.

AWS DataSync is a data transfer service that simplifies and accelerates moving large amounts of data between on-premises storage systems and AWS storage services, including Amazon S3. DataSync is designed to handle intermittent network connectivity and can automatically resume transfers when connectivity is restored. It provides efficient, secure, and fast data transfer with built-in data validation and encryption capabilities.

Option B, AWS Migration Hub, is not the appropriate solution for ongoing data transfer and is mainly used for tracking and managing application migrations.

Option C, AWS Snowball Edge Storage Optimized, is a physical device used for large-scale data transfer where the data is initially loaded onto the device and then shipped to AWS. It is not suitable for ongoing data transfer and occasional interruptions in internet connectivity.

Option D, AWS Transfer for SFTP, is a fully managed service that enables file transfer over the SSH File Transfer Protocol (SFTP) directly into Amazon S3. However, it is a server endpoint that clients must push files to; it does not provide DataSync's scheduled, incremental transfers with built-in verification and automatic recovery from intermittent connectivity.

Therefore, the most suitable solution for the company’s data transfer requirements, including occasional interruptions in internet connectivity, is to use AWS DataSync (option A).
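The DataSync setup amounts to two locations (the on-premises NFS share and the S3 bucket) joined by a scheduled task. The sketch below builds the parameter dictionary that boto3's `datasync` `create_task` call takes; the location ARNs are hypothetical placeholders, and the helper is pure Python so it can be shown without an AWS account.

```python
def hourly_sync_task_params(source_loc_arn: str, dest_loc_arn: str) -> dict:
    """Parameters for datasync.create_task: an hourly task that copies only
    changed files and checksum-verifies what it transferred."""
    return {
        "SourceLocationArn": source_loc_arn,        # on-premises NFS location
        "DestinationLocationArn": dest_loc_arn,     # S3 bucket location
        "Name": "onprem-share-to-s3",
        "Options": {
            "VerifyMode": "ONLY_FILES_TRANSFERRED",  # verify transferred data
            "TransferMode": "CHANGED",               # incremental, not full copies
        },
        "Schedule": {"ScheduleExpression": "rate(1 hour)"},
    }

params = hourly_sync_task_params(
    "arn:aws:datasync:us-east-1:111122223333:location/loc-src",
    "arn:aws:datasync:us-east-1:111122223333:location/loc-dst",
)
```

Each task execution picks up where data last changed, which is what makes the hourly analytics cadence and connectivity interruptions manageable.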

Question 993

Exam Question

A solutions architect is deploying a distributed database on multiple Amazon EC2 instances. The database stores all data on multiple instances so it can withstand the loss of an instance. The database requires block storage with low latency and high throughput to support several million transactions per second per server.

Which storage solution should the solutions architect use?

A. Amazon EBS
B. Amazon EC2 instance store
C. Amazon EFS
D. Amazon S3

Correct Answer

B. Amazon EC2 instance store

Explanation

Based on the requirement for block storage with low latency and high throughput to support several million transactions per second per server, the most suitable storage solution for the distributed database is:

B. Amazon EC2 instance store.

Amazon EC2 instance store provides block-level storage that is directly attached to the EC2 instances. It offers low-latency and high-throughput access to data and is optimized for performance. The data stored on the instance store is ephemeral and does not persist if the instance is stopped or terminated. However, since the database is designed to store all data on multiple instances to withstand the loss of an instance, the data redundancy and high availability requirements can be achieved by replicating the data across multiple instances.

Amazon EBS (option A) is a persistent block storage option that can be attached to EC2 instances, but EBS volumes are network-attached; even the highest provisioned-IOPS volumes deliver far fewer operations per second than the locally attached NVMe storage of an instance store, so EBS cannot meet a requirement of several million transactions per second per server.

Amazon EFS (option C) is a managed file storage service that provides network file system (NFS) storage. While it offers scalability and shared access across multiple instances, it may not provide the required level of low latency and high throughput for the database workload.

Amazon S3 (option D) is an object storage service that is not suitable for block-level storage and does not provide the necessary performance characteristics for the database workload.

Therefore, the most appropriate storage solution for the distributed database in this scenario is Amazon EC2 instance store (option B).

Question 994

Exam Question

A company recently launched Linux-based application instances on Amazon EC2 in a private subnet and launched a Linux-based bastion host on an Amazon EC2 instance in a public subnet of a VPC. A solutions architect needs to connect from the on-premises network, through the company internet connection, to the bastion host, and to the application servers. The solutions architect must make sure that the security groups of all the EC2 instances will allow that access.

Which combination of steps should the solutions architect take to meet these requirements? (Choose two.)

A. Replace the current security group of the bastion host with one that only allows inbound access from the application instances.
B. Replace the current security group of the bastion host with one that only allows inbound access from the internal IP range for the company.
C. Replace the current security group of the bastion host with one that only allows inbound access from the external IP range for the company.
D. Replace the current security group of the application instances with one that allows inbound SSH access from only the private IP address of the bastion host.
E. Replace the current security group of the application instances with one that allows inbound SSH access from only the public IP address of the bastion host.

Correct Answer

C. Replace the current security group of the bastion host with one that only allows inbound access from the external IP range for the company.
D. Replace the current security group of the application instances with one that allows inbound SSH access from only the private IP address of the bastion host.

Explanation

To enable connectivity from the on-premises network, through the company internet connection, to the bastion host, and then to the application servers, the following steps should be taken:

C. Replace the current security group of the bastion host with one that only allows inbound access from the external IP range for the company. Because the connection from the on-premises network traverses the company's internet connection, traffic arrives at the bastion host from the company's external (public) IP range, typically after NAT at the corporate edge. The internal IP range (option B) never appears as the source address of traffic arriving over the internet, so allowing only the external range is what correctly restricts bastion access to company users.

D. Replace the current security group of the application instances with one that allows inbound SSH access from only the private IP address of the bastion host. Inside the VPC, the bastion host reaches the application instances in the private subnet over its private IP address, so the application security group should allow SSH only from that address. The bastion's public IP (option E) is never the source of traffic within the VPC.

These steps establish a controlled connection path: from the company's external IP range to the bastion host, and from the bastion host's private IP address to the application instances.
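The two security-group changes can be expressed as the `IpPermissions` entries you would pass to EC2's `authorize_security_group_ingress`. The CIDR and IP values below are hypothetical; the point is that the bastion rule admits only the company's address range, and the application rule admits only the bastion's private IP.

```python
def ssh_from_cidr(cidr: str, description: str) -> dict:
    """One IpPermissions entry allowing inbound SSH (TCP 22) from a single CIDR."""
    return {
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": cidr, "Description": description}],
    }

# Bastion host: SSH only from the company's IP range (hypothetical CIDR)
bastion_rule = ssh_from_cidr("203.0.113.0/24", "Company IP range")

# Application instances: SSH only from the bastion's private IP, as a /32
app_rule = ssh_from_cidr("10.0.1.10/32", "Bastion host private IP")
```

Keeping the application rule a /32 means that even other hosts in the public subnet cannot reach the private instances over SSH.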

Question 995

Exam Question

A company is migrating a three-tier application to AWS. The application requires a MySQL database. In the past, the application users reported poor application performance when creating new entries. These performance issues were caused by users generating different real-time reports from the application during working hours.

Which solution will improve the performance of the application when it is moved to AWS?

A. Import the data into an Amazon DynamoDB table with provisioned capacity. Refactor the application to use DynamoDB for reports.
B. Create the database on a compute optimized Amazon EC2 instance. Ensure compute resources exceed the on-premises database.
C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the application reader endpoint for reports.
D. Create an Amazon Aurora MySQL Multi-AZ DB cluster. Configure the application to use the backup instance of the cluster as an endpoint for the reports.

Correct Answer

C. Create an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas. Configure the application reader endpoint for reports.

Explanation

By creating an Amazon Aurora MySQL Multi-AZ DB cluster with multiple read replicas, you improve the performance of the application when generating reports. The read replicas offload the read traffic from the primary database instance, allowing for better scalability and performance. The application can be configured to use the reader endpoint, which automatically routes read requests to the appropriate read replica, further distributing the load and improving performance during working hours when generating reports.
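A common pattern for exploiting the read replicas is to choose the connection endpoint by workload type: report queries go to the cluster's reader endpoint, which load-balances across replicas, while writes go to the cluster (writer) endpoint. The endpoint strings below are hypothetical examples of Aurora's naming shape.

```python
# Hypothetical Aurora cluster endpoints (writer and read-only reader)
WRITER = "appdb.cluster-abc123.us-east-1.rds.amazonaws.com"
READER = "appdb.cluster-ro-abc123.us-east-1.rds.amazonaws.com"

def endpoint_for(workload: str) -> str:
    # Send read-only report traffic to the reader endpoint so the replicas
    # absorb it; everything else (new entries, updates) uses the writer.
    return READER if workload == "report" else WRITER
```

This keeps the reporting load off the primary instance, which is exactly what resolves the slow-insert symptom described in the question.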

Question 996

Exam Question

A development team is deploying a new product on AWS and is using AWS Lambda as part of the deployment. The team allocates 512 MB of memory for one of the Lambda functions. With this memory allocation, the function is completed in 2 minutes. The function runs millions of times monthly, and the development team is concerned about cost. The team conducts tests to see how different Lambda memory allocations affect the cost of the function.

Which steps will reduce the Lambda costs for the product? (Choose two.)

A. Increase the memory allocation for this Lambda function to 1,024 MB if this change causes the execution time of each function to be less than 1 minute.
B. Increase the memory allocation for this Lambda function to 1,024 MB if this change causes the execution time of each function to be less than 90 seconds.
C. Reduce the memory allocation for this Lambda function to 256 MB if this change causes the execution time of each function to be less than 4 minutes.
D. Increase the memory allocation for this Lambda function to 2,048 MB if this change causes the execution time of each function to be less than 1 minute.
E. Reduce the memory allocation for this Lambda function to 256 MB if this change causes the execution time of each function to be less than 5 minutes.

Correct Answer

A. Increase the memory allocation for this Lambda function to 1,024 MB if this change causes the execution time of each function to be less than 1 minute.
C. Reduce the memory allocation for this Lambda function to 256 MB if this change causes the execution time of each function to be less than 4 minutes.

Explanation

The cost of AWS Lambda is based on the allocated memory and the duration of each function execution. By adjusting the memory allocation, you can optimize the cost of the Lambda function while maintaining acceptable execution times.

To reduce the Lambda costs:
A. Increase the memory allocation to 1,024 MB if the execution time of each function drops below 1 minute. Lambda compute cost is proportional to allocated memory multiplied by billed duration (metered in 1 ms increments), so the current configuration costs 512 MB x 120 s = 60 GB-seconds per invocation. At 1,024 MB the break-even duration is exactly 60 seconds; finishing in under 1 minute is therefore cheaper. More memory also brings proportionally more CPU, which is often what shortens the run.

C. Reduce the memory allocation to 256 MB if the execution time of each function stays below 4 minutes. At 256 MB the break-even duration is 240 seconds, so anything under 4 minutes costs less than the current 60 GB-seconds. (Option E fails this test: 256 MB for up to 5 minutes is as much as 75 GB-seconds, more than today's cost.)

It’s important to note that the optimal memory allocation and execution time may vary depending on the specific workload and requirements of the application. Conducting tests and monitoring the performance and cost implications of different memory allocations is recommended to find the most cost-effective configuration.
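The break-even arithmetic behind options A and C can be checked directly. The per-GB-second price below is an assumed figure for illustration (verify against current Lambda pricing), and per-request charges are ignored since they do not vary with memory allocation.

```python
GB_SECOND_PRICE = 0.0000166667  # assumed x86 price per GB-second; check current pricing

def invocation_cost(memory_mb: int, duration_ms: int) -> float:
    # Compute cost = allocated memory (GB) x billed duration (s, 1 ms granularity) x rate
    return (memory_mb / 1024) * (duration_ms / 1000) * GB_SECOND_PRICE

baseline = invocation_cost(512, 120_000)          # current config: 60 GB-seconds

assert invocation_cost(1024, 59_000) < baseline   # A: under 1 min at 1,024 MB is cheaper
assert invocation_cost(256, 239_000) < baseline   # C: under 4 min at 256 MB is cheaper
assert invocation_cost(256, 300_000) > baseline   # E: 5 min at 256 MB (75 GB-s) costs more
```

Multiplying any of these per-invocation figures by millions of monthly runs shows why even small duration or memory changes matter at this scale.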

Question 997

Exam Question

A product team is creating a new application that will store a large amount of data. The data will be analyzed hourly and modified by multiple Amazon EC2 Linux instances. The application team believes the amount of space needed will continue to grow for the next 6 months.

Which set of actions should a solutions architect take to support these needs?

A. Store the data in an Amazon EBS volume. Mount the EBS volume on the application instances.
B. Store the data in an Amazon EFS file system. Mount the file system on the application instances.
C. Store the data in Amazon S3 Glacier. Update the vault policy to allow access to the application instances.
D. Store the data in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Update the bucket policy to allow access to the application instances.

Correct Answer

B. Store the data in an Amazon EFS file system. Mount the file system on the application instances.

Explanation

Because the data must be read and modified concurrently by multiple Amazon EC2 Linux instances, and the amount of storage is expected to keep growing for the next 6 months, the workload needs a shared, elastic file system.

B. Amazon EFS provides exactly that: the file system can be mounted over NFS on many instances at once, supports concurrent read and write access, and grows automatically as data is added, so no capacity needs to be provisioned up front.

Option A (Amazon EBS) is unsuitable because an EBS volume lives in a single Availability Zone and, apart from the narrow Multi-Attach case, can be attached to only one instance at a time, so it cannot serve as shared storage across multiple instances.

Option C (Amazon S3 Glacier) is archival storage with retrieval delays, which is incompatible with data that is analyzed hourly.

Option D (S3 Standard-IA) is object storage, not a mountable file system, so the instances could not modify the data in place; it also carries per-GB retrieval charges and a minimum storage duration, making it a poor fit for frequently modified data.

Therefore, option B (Amazon EFS) best supports shared, growing, frequently modified data across multiple EC2 instances.

Question 998

Exam Question

A company hosts multiple production applications. One of the applications consists of resources from Amazon EC2, AWS Lambda, Amazon RDS, Amazon Simple Notification Service (Amazon SNS), and Amazon Simple Queue Service (Amazon SQS) across multiple AWS Regions. All company resources are tagged with a tag name of application and a value that corresponds to each application. A solutions architect must provide the quickest solution for identifying all of the tagged components.

Which solution meets these requirements?

A. Use AWS CloudTrail to generate a list of resources with the application tag.
B. Use the AWS CLI to query each service across all Regions to report the tagged components.
C. Run a query in Amazon CloudWatch Logs Insights to report on the components with the application tag.
D. Run a query with the AWS Resource Groups Tag Editor to report on the resources globally with the application tag.

Correct Answer

D. Run a query with the AWS Resource Groups Tag Editor to report on the resources globally with the application tag.

Explanation

The quickest solution for identifying all of the tagged components across multiple AWS Regions is to use Tag Editor in AWS Resource Groups. Tag Editor lets you search and filter resources by their tags across all Regions and supported services in one query, making it an efficient way to find every resource carrying a specific tag, such as the "application" tag in this case.

Option A, using AWS CloudTrail, is not the most suitable solution for this requirement. While CloudTrail can provide detailed audit logs of API activity, it would require additional processing and analysis to generate a list of resources with the application tag.

Option B, using the AWS CLI to query each service across all Regions, is time-consuming and less efficient as it would involve making separate API calls to each service and iterating through all the Regions.

Option C, running a query in Amazon CloudWatch Logs Insights, is primarily focused on analyzing and querying log data rather than providing a comprehensive view of tagged resources across multiple services and Regions.

Therefore, option D is the best solution for quickly identifying all the tagged components across multiple AWS Regions by utilizing the AWS Resource Groups Tag Editor’s global search and filter capabilities.
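With boto3, the equivalent query is a single call, `boto3.client("resourcegroupstaggingapi").get_resources(TagFilters=[{"Key": "application"}])`, run per Region. The pure function below mirrors the filter that call applies server-side, so it can be demonstrated (and tested) without an AWS account; the sample ARNs are hypothetical.

```python
def resources_with_tag(resources, key, value=None):
    """Return ARNs of resources whose tag set contains `key`
    (and `value`, when one is given), mirroring a TagFilters query."""
    def matches(tags):
        return any(t["Key"] == key and (value is None or t["Value"] == value)
                   for t in tags)
    return [r["ResourceARN"] for r in resources if matches(r.get("Tags", []))]

# Hypothetical inventory such as get_resources would return
inventory = [
    {"ResourceARN": "arn:aws:lambda:us-east-1:111122223333:function:etl",
     "Tags": [{"Key": "application", "Value": "billing"}]},
    {"ResourceARN": "arn:aws:sqs:eu-west-1:111122223333:jobs",
     "Tags": [{"Key": "team", "Value": "data"}]},
]
```

Filtering by tag key alone returns every component of every application; adding the value narrows the result to one application's resources.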

Question 999

Exam Question

A company serves content to its subscribers across the world using an application running on AWS. The application has several Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB). Due to a recent change in copyright restrictions, the chief information officer (CIO) wants to block access for certain countries.

Which action will meet these requirements?

A. Modify the ALB security group to deny incoming traffic from blocked countries.
B. Modify the security group for EC2 instances to deny incoming traffic from blocked countries.
C. Use Amazon CloudFront to serve the application and deny access to blocked countries.
D. Use ALB listener rules to return access denied responses to incoming traffic from blocked countries.

Correct Answer

C. Use Amazon CloudFront to serve the application and deny access to blocked countries.

Explanation

To block access from specific countries, the most appropriate action is to put Amazon CloudFront in front of the application and use CloudFront's geo restriction feature. Geo restriction lets you allowlist or blocklist countries (identified by ISO 3166-1 alpha-2 codes), and CloudFront rejects requests from blocked countries at the edge with an HTTP 403 response, before they ever reach the ALB.

Options A and B, modifying security groups, do not work: security groups filter on IP address, protocol, and port, support only allow rules (there is no explicit deny), and have no concept of the geographic origin of a request.

Option D is not viable on its own. ALB listener rules support conditions such as host header, path pattern, HTTP headers, query strings, and source IP, but they have no built-in country condition; matching on a header such as CloudFront-Viewer-Country only works when CloudFront is already in front of the ALB.

Therefore, option C, serving the application through Amazon CloudFront and denying access with geo restriction, is the action that meets the CIO's requirement.
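CloudFront's geo restriction capability is expressed as a `Restrictions` element inside the distribution configuration. The helper below builds that element for a blocklist; the country codes shown are examples only.

```python
def geo_blacklist(country_codes):
    """Restrictions element for a CloudFront DistributionConfig that denies
    viewers from the listed ISO 3166-1 alpha-2 countries with an HTTP 403."""
    codes = list(country_codes)
    return {
        "GeoRestriction": {
            "RestrictionType": "blacklist",  # deny listed countries; 'whitelist' inverts
            "Quantity": len(codes),          # CloudFront requires the count to match Items
            "Items": codes,
        }
    }

restrictions = geo_blacklist(["CU", "KP"])  # example country codes
```

The same structure with `"whitelist"` flips the policy to allow only the listed countries, whichever direction the copyright restrictions require.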

Question 1000

Exam Question

A company is building a web-based application running on Amazon EC2 instances in multiple Availability Zones. The web application will provide access to a repository of text documents totaling about 900 TB in size. The company anticipates that the web application will experience periods of high demand. A solutions architect must ensure that the storage component for the text documents can scale to meet the demand of the application at all times. The company is concerned about the overall cost of the solution.

Which storage solution meets these requirements MOST cost-effectively?

A. Amazon Elastic Block Store (Amazon EBS)
B. Amazon Elastic File System (Amazon EFS)
C. Amazon Elasticsearch Service (Amazon ES)
D. Amazon S3

Correct Answer

D. Amazon S3

Explanation

In this scenario, considering the requirements of scalability, high demand, and cost-effectiveness, Amazon S3 is the most cost-effective storage solution for the text documents repository.

Amazon S3 (Simple Storage Service) is a highly scalable and durable object storage service that is designed to handle large amounts of data. It offers virtually unlimited storage capacity and can easily handle the 900 TB size of the text documents repository.

S3 is designed to handle high demand and can scale to meet the application’s needs. It is capable of handling concurrent requests and provides high availability and durability. S3 also offers high performance, allowing for fast retrieval and access to the stored documents.

In terms of cost-effectiveness, S3 offers a range of storage classes, including S3 Standard, S3 Intelligent-Tiering, and S3 Glacier Deep Archive, allowing you to choose the class that best matches the access patterns and cost requirements of the application. This flexibility enables you to optimize costs based on how the text documents are actually used.

Option A, Amazon EBS (Elastic Block Store), is not the most cost-effective choice for this scenario as it provides block-level storage designed for use with EC2 instances and may not be suitable for the scale and size of the text documents repository.

Option B, Amazon EFS (Elastic File System), is a managed file storage service that can provide scalability and concurrent access, but it may not be as cost-effective as Amazon S3 for large-scale data storage.

Option C, Amazon Elasticsearch Service (Amazon ES), is a search and analytics service and may not be the most suitable option for storing and serving large amounts of text documents.

Therefore, option D, Amazon S3, is the most cost-effective storage solution for the text documents repository considering scalability, high demand, and cost-effectiveness.
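A rough monthly storage comparison shows why S3 wins on cost at this scale. The per-GB prices below are assumed illustrative figures (roughly us-east-1 ballpark; verify current pricing), and the estimate ignores request charges and the operational cost of assembling 900 TB from many separate EBS volumes.

```python
S3_STANDARD_PER_GB_MONTH = 0.023  # assumed S3 Standard price, USD per GB-month
EBS_GP3_PER_GB_MONTH = 0.08       # assumed gp3 price, USD per GB-month, for comparison

repo_gb = 900 * 1024              # ~900 TB expressed in GB

s3_monthly = repo_gb * S3_STANDARD_PER_GB_MONTH   # about $21,200 per month
ebs_monthly = repo_gb * EBS_GP3_PER_GB_MONTH      # about $73,700 per month
```

Even before tiering colder documents into cheaper classes, object storage is a small fraction of the equivalent block-storage bill, and it scales without any volume management.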

The post AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 28 appeared first on PUPUWEB - Information Resource for Emerging Technology Trends and Cybersecurity.


