AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 65

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free of charge to help you pass the SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate certification.

Question 1361

Exam Question

A company has an application that posts messages to Amazon SQS. Another application polls the queue and processes the messages in an I/O-intensive operation. The company has a service level agreement (SLA) that specifies the maximum amount of time that can elapse between receiving the messages and responding to the users. Due to an increase in the number of messages, the company has difficulty meeting its SLA consistently.

What should a solutions architect do to help improve the application’s processing time and ensure it can handle the load at any level?

A. Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with a larger size.
B. Create an Amazon Machine Image (AMI) from the instance used for processing. Terminate the instance and replace it with an Amazon EC2 Dedicated Instance.
C. Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy to keep its aggregate CPU utilization below 70%.
D. Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy based on the age of the oldest message in the SQS queue.

Correct Answer

C. Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy to keep its aggregate CPU utilization below 70%.

Explanation

To improve the application’s processing time and ensure it can handle the load while meeting the SLA consistently, a solutions architect should recommend the following steps:

1. Create an Amazon Machine Image (AMI) from the instance used for processing: This allows for easy replication of the existing instance configuration.

2. Create an Auto Scaling group using the AMI in its launch configuration: The Auto Scaling group will manage the number of instances dynamically based on the workload.

3. Configure the Auto Scaling group with a target tracking policy to keep the aggregate CPU utilization below a certain threshold: This ensures that the instances scale up or down based on CPU utilization, effectively handling load fluctuations.

Therefore, the recommended approach is option C: Create an Amazon Machine Image (AMI) from the instance used for processing. Create an Auto Scaling group using this image in its launch configuration. Configure the group with a target tracking policy to keep the aggregate CPU utilization below 70%.

This approach allows for automatic scaling based on CPU utilization, ensuring that the application can handle increased load and meet the SLA consistently. It provides flexibility in managing the instances and ensures efficient resource allocation while optimizing performance.
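
For illustration, the following is a minimal boto3 (Python) sketch of the target tracking policy that option C describes. The Auto Scaling group and policy names are placeholders; only the 70% target value comes from the answer itself.

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical group name; assumes an Auto Scaling group built from the AMI already exists.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="sqs-processor-asg",
    PolicyName="keep-cpu-below-70",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            # Average CPU utilization across the group, as in option C.
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 70.0,
    },
)
```

With a target tracking policy, Auto Scaling adds or removes instances automatically to hold the group's average CPU utilization near the target value.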

Question 1362

Exam Question

A company is designing a new service that will run on Amazon EC2 instances behind an Elastic Load Balancer. However, many of the web service clients can reach only IP addresses that are whitelisted on their firewalls.

What should a solutions architect recommend to meet the clients’ needs?

A. A Network Load Balancer with an associated Elastic IP address.
B. An Application Load Balancer with an associated Elastic IP address.
C. An A record in an Amazon Route 53 hosted zone pointing to an Elastic IP address.
D. An EC2 instance with a public IP address running as a proxy in front of the load balancer.

Correct Answer

D. An EC2 instance with a public IP address running as a proxy in front of the load balancer.

Explanation

To meet the needs of clients with whitelisted IP address requirements, a solutions architect should recommend the following approach:

D. An EC2 instance with a public IP address running as a proxy in front of the load balancer.

By using an EC2 instance as a proxy, the clients can whitelist the public IP address of the proxy instance on their firewalls. The proxy instance will receive incoming requests from the clients, and then forward them to the Elastic Load Balancer, which distributes the traffic to the backend EC2 instances.

This solution ensures that the clients can reach the service by allowing access to the whitelisted proxy IP address. The proxy instance acts as an intermediary, enabling connectivity between the clients and the load balancer.
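
As a hedged sketch of one piece of this design, the boto3 (Python) snippet below allocates an Elastic IP address and attaches it to a hypothetical proxy instance, giving clients a stable address to whitelist. Installing and configuring the proxy software itself on the instance is a separate step.

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP so the proxy's address never changes.
allocation = ec2.allocate_address(Domain="vpc")

# Attach it to the (hypothetical) proxy instance that forwards traffic
# to the load balancer.
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",  # placeholder proxy instance ID
    AllocationId=allocation["AllocationId"],
)

print("Clients should whitelist:", allocation["PublicIp"])
```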

Question 1363

Exam Question

A company operates an ecommerce website on Amazon EC2 instances behind an Application Load Balancer (ALB) in an Auto Scaling group. The site is experiencing performance issues related to a high request rate from illegitimate external systems with changing IP addresses. The security team is worried about potential DDoS attacks against the website. The company must block the illegitimate incoming requests in a way that has a minimal impact on legitimate users.

What should a solutions architect recommend?

A. Deploy Amazon Inspector and associate it with the ALB.
B. Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.
C. Deploy rules to the network ACLs associated with the ALB to block the incoming traffic.
D. Deploy Amazon GuardDuty and enable rate-limiting protection when configuring GuardDuty.

Correct Answer

B. Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.

Explanation

To address the performance issues caused by high request rates from illegitimate external systems and protect against potential DDoS attacks, a solutions architect should recommend:

B. Deploy AWS WAF, associate it with the ALB, and configure a rate-limiting rule.

AWS WAF (Web Application Firewall) is designed to protect web applications from common web exploits and provides a way to filter and monitor HTTP and HTTPS requests. By associating AWS WAF with the Application Load Balancer (ALB), you can set up rules to filter and block incoming requests based on specific criteria, such as IP addresses or request rates.

In this case, configuring a rate-limiting rule in AWS WAF allows you to restrict the number of requests from individual IP addresses, effectively mitigating the impact of the high request rate from illegitimate sources. Legitimate users will not be affected as long as they do not exceed the defined threshold.

By using AWS WAF to filter and block illegitimate incoming requests, you can protect your website from DDoS attacks while minimizing the impact on legitimate users.
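
Below is a sketch, using assumed names and a hypothetical limit of 2,000 requests per 5-minute window, of how such a rate-based rule could be created with boto3 (Python) and attached to the ALB.

```python
import boto3

wafv2 = boto3.client("wafv2")

web_acl = wafv2.create_web_acl(
    Name="ecommerce-rate-limit",  # placeholder name
    Scope="REGIONAL",             # REGIONAL scope is required for an ALB
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "block-high-request-rates",
            "Priority": 0,
            "Statement": {
                # Blocks any single IP that exceeds the limit in a 5-minute window.
                "RateBasedStatement": {"Limit": 2000, "AggregateKeyType": "IP"}
            },
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "rateLimit",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "ecommerceWebAcl",
    },
)

# Associate the web ACL with the (placeholder) ALB.
wafv2.associate_web_acl(
    WebACLArn=web_acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:...",  # ALB ARN left elided
)
```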

Question 1364

Exam Question

A company wants to host a web application on AWS that will communicate to a database within a VPC. The application should be highly available.

What should a solutions architect recommend?

A. Create two Amazon EC2 instances to host the web servers behind a load balancer, and then deploy the database on a large instance.
B. Deploy a load balancer in multiple Availability Zones with an Auto Scaling group for the web servers, and then deploy Amazon RDS in multiple Availability Zones.
C. Deploy a load balancer in the public subnet with an Auto Scaling group for the web servers, and then deploy the database on an Amazon EC2 instance in the private subnet.
D. Deploy two web servers with an Auto Scaling group, configure a domain that points to the two web servers, and then deploy a database architecture in multiple Availability Zones.

Correct Answer

B. Deploy a load balancer in multiple Availability Zones with an Auto Scaling group for the web servers, and then deploy Amazon RDS in multiple Availability Zones.

Explanation

To host a highly available web application on AWS that communicates with a database within a VPC, a solutions architect should recommend:

B. Deploy a load balancer in multiple Availability Zones with an Auto Scaling group for the web servers, and then deploy Amazon RDS in multiple Availability Zones.

By deploying a load balancer in multiple Availability Zones, you ensure that your web application can distribute traffic and handle failures across different zones, providing high availability. By associating an Auto Scaling group with the load balancer, you can automatically scale the number of web servers based on demand.

In addition, deploying the database on Amazon RDS in multiple Availability Zones ensures that your database is replicated and available across different zones. This provides fault tolerance and data durability.

Combining these components, you can achieve high availability for both the web application and the database, ensuring that your application remains accessible and resilient even in the event of failures in specific zones or instances.
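
As a minimal boto3 (Python) sketch of the database half of this answer, the snippet below creates a Multi-AZ RDS instance; the identifier, engine, and sizes are assumptions, not values from the question.

```python
import boto3

rds = boto3.client("rds")

# MultiAZ=True provisions a synchronous standby in a second Availability Zone
# and fails over automatically if the primary becomes unavailable.
rds.create_db_instance(
    DBInstanceIdentifier="webapp-db",   # placeholder identifier
    Engine="mysql",                     # assumed engine
    DBInstanceClass="db.m5.large",      # assumed size
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="REPLACE_ME",    # never hard-code real credentials
    MultiAZ=True,
)
```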

Question 1365

Exam Question

A solutions architect is creating an application that will handle batch processing of large amounts of data. The input data will be held in Amazon S3 and the output data will be stored in a different S3 bucket. For processing, the application will transfer the data over the network between multiple Amazon EC2 instances.

What should the solutions architect do to reduce the overall data transfer costs?

A. Place all the EC2 instances in an Auto Scaling group.
B. Place all the EC2 instances in the same AWS Region.
C. Place all the EC2 instances in the same Availability Zone.
D. Place all the EC2 instances in private subnets in multiple Availability Zones.

Correct Answer

D. Place all the EC2 instances in private subnets in multiple Availability Zones.

Explanation

To reduce the overall data transfer costs in the given scenario, the solutions architect should:

D. Place all the EC2 instances in private subnets in multiple Availability Zones.

By placing the EC2 instances in private subnets within multiple Availability Zones, you keep the traffic between instances on private IP addresses inside the AWS Region. Data transfer between EC2 instances over private IP addresses in the same Availability Zone is free, and data transfer between Availability Zones within the same Region costs far less than data transfer across Regions or over the public internet.

By keeping the EC2 instances in private subnets, traffic between them stays on private network connectivity and avoids the charges that apply to traffic routed over public or Elastic IP addresses. This enables efficient and cost-effective data transfer for the batch processing application.

Placing the EC2 instances in an Auto Scaling group, as mentioned in option A, is a good practice for scalability and availability, but it does not directly address reducing data transfer costs.

Placing the EC2 instances in the same AWS Region, as mentioned in option B, is already assumed in the context of the scenario, as the data is stored in Amazon S3 and the EC2 instances will process the data within the same AWS Region.

Placing all the EC2 instances in the same Availability Zone, as mentioned in option C, would also make instance-to-instance data transfer free, but it concentrates the entire workload in a single zone, leaving the batch processing with no resilience to an Availability Zone failure.

Therefore, option D is the most appropriate choice for reducing data transfer costs in this scenario.
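
For illustration, the boto3 (Python) sketch below launches processing instances into private subnets in two different Availability Zones; the AMI and subnet IDs are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical private subnets in two different Availability Zones.
private_subnets = ["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"]

for subnet_id in private_subnets:
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder processing AMI
        InstanceType="m5.large",          # assumed instance type
        MinCount=1,
        MaxCount=1,
        SubnetId=subnet_id,               # private subnet keeps traffic on private IPs
    )
```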

Question 1366

Exam Question

A company’s packaged application dynamically creates and returns single-use text files in response to user requests. The company is using Amazon CloudFront for distribution, but wants to further reduce data transfer costs. The company cannot modify the application’s source code.

What should a solutions architect do to reduce costs?

A. Use Lambda@Edge to compress the files as they are sent to users.
B. Enable Amazon S3 Transfer Acceleration to reduce the response times.
C. Enable caching on the CloudFront distribution to store generated files at the edge.
D. Use Amazon S3 multipart uploads to move the files to Amazon S3 before returning them to users.

Correct Answer

C. Enable caching on the CloudFront distribution to store generated files at the edge.

Explanation

To reduce data transfer costs for the dynamically created single-use text files served through Amazon CloudFront without modifying the application’s source code, a solutions architect should recommend:

C. Enable caching on the CloudFront distribution to store generated files at the edge.

Enabling caching on the CloudFront distribution allows the generated files to be stored at the edge locations closest to the users. This means that subsequent requests for the same files from different users or the same user will be served directly from the edge locations, reducing the data transfer costs. The files will be cached for a configurable time period, reducing the need to fetch them from the origin server repeatedly.

Using Lambda@Edge, as mentioned in option A, can be helpful for modifying the content or behavior of the files at the edge locations, but it does not directly reduce data transfer costs.

Enabling Amazon S3 Transfer Acceleration, as mentioned in option B, improves the transfer speeds for large files over long distances but does not specifically address reducing data transfer costs.

Using Amazon S3 multipart uploads and moving the files to Amazon S3 before returning them to users, as mentioned in option D, may introduce additional complexity and costs, as it involves additional storage and data transfer between CloudFront and S3.

Therefore, the most appropriate option to reduce data transfer costs in this scenario is to enable caching on the CloudFront distribution.
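
As a sketch under assumed names and TTL values, a CloudFront cache policy that keeps generated files at the edge could be created with boto3 (Python) as follows; the policy would then be referenced from the distribution's cache behavior.

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Cache generated files at the edge; the TTL values here are assumptions to tune.
cloudfront.create_cache_policy(
    CachePolicyConfig={
        "Name": "cache-generated-text-files",  # placeholder name
        "MinTTL": 60,
        "DefaultTTL": 3600,
        "MaxTTL": 86400,
        "ParametersInCacheKeyAndForwardedToOrigin": {
            "EnableAcceptEncodingGzip": True,
            "HeadersConfig": {"HeaderBehavior": "none"},
            "CookiesConfig": {"CookieBehavior": "none"},
            "QueryStringsConfig": {"QueryStringBehavior": "none"},
        },
    }
)
```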

Question 1367

Exam Question

A company receives structured and semi-structured data from various sources once every day. A solutions architect needs to design a solution that leverages big data processing frameworks. The data should be accessible using SQL queries and business intelligence tools.

What should the solutions architect recommend to build the MOST high-performing solution?

A. Use AWS Glue to process data and Amazon S3 to store data.
B. Use Amazon EMR to process data and Amazon Redshift to store data.
C. Use Amazon EC2 to process data and Amazon Elastic Block Store (Amazon EBS) to store data.
D. Use Amazon Kinesis Data Analytics to process data and Amazon Elastic File System (Amazon EFS) to store data.

Correct Answer

B. Use Amazon EMR to process data and Amazon Redshift to store data.

Explanation

To build the most high-performing solution for processing structured and semi-structured data and making it accessible for SQL queries and business intelligence tools, the solutions architect should recommend:

B. Use Amazon EMR to process data and Amazon Redshift to store data.

Amazon EMR (Elastic MapReduce) is a big data processing framework that allows for efficient processing of large datasets using popular distributed processing frameworks like Apache Spark, Apache Hadoop, and others. It provides scalable, cost-effective, and high-performance processing capabilities.

Amazon Redshift, on the other hand, is a fully managed data warehousing service designed for online analytic processing (OLAP). It offers fast query performance for analytical workloads and supports SQL queries. With Redshift, you can store and analyze large volumes of structured data efficiently.

By combining Amazon EMR for data processing and Amazon Redshift for data storage, you can leverage the strengths of both services. EMR can handle the processing and transformation of structured and semi-structured data, while Redshift can store the processed data in a format optimized for analytics and provide fast query performance.

Options A, C, and D do not provide the same level of performance and suitability for SQL queries and business intelligence tools as the combination of Amazon EMR and Amazon Redshift. AWS Glue (option A) is primarily used for data cataloging, ETL (extract, transform, load) processes, and data discovery. Amazon EC2 and Amazon EBS (option C) are more suitable for general-purpose computing and storage, not specifically optimized for big data processing and analytics. Amazon Kinesis Data Analytics and Amazon EFS (option D) are more focused on real-time streaming data processing rather than batch processing and SQL-based analytics.

Therefore, option B is the most appropriate recommendation for building a high-performing solution for processing and analyzing structured and semi-structured data using big data processing frameworks.
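
For illustration only, the boto3 (Python) sketch below launches a transient EMR cluster running Spark for the daily batch; the release label, instance sizes, and IAM roles are placeholder assumptions.

```python
import boto3

emr = boto3.client("emr")

# Launch a Spark cluster for the daily batch; release label, sizes, and
# roles are placeholder assumptions.
emr.run_job_flow(
    Name="daily-batch-processing",
    ReleaseLabel="emr-6.10.0",
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"InstanceRole": "CORE", "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        # Terminate the cluster when the daily batch finishes.
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
# The Spark job would write its output to S3, from which Redshift can load it
# with a COPY command for SQL and BI access.
```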

Question 1368

Exam Question

A database is on an Amazon RDS MySQL 5.6 Multi-AZ DB instance that experiences highly dynamic reads. Application developers notice a significant slowdown when testing read performance from a secondary AWS Region. The developers want a solution that provides less than 1 second of read replication latency.

What should the solutions architect recommend?

A. Install MySQL on Amazon EC2 in the secondary Region.
B. Migrate the database to Amazon Aurora with cross-Region replicas.
C. Create another RDS for MySQL read replica in the secondary Region.
D. Implement Amazon ElastiCache to improve database query performance.

Correct Answer

B. Migrate the database to Amazon Aurora with cross-Region replicas.

Explanation

To achieve less than 1 second of read replication latency between the primary AWS Region and a secondary AWS Region for an Amazon RDS MySQL database, the solutions architect should recommend:

B. Migrate the database to Amazon Aurora with cross-Region replicas.

Amazon Aurora is a MySQL-compatible relational database engine provided by AWS. It offers high performance, scalability, and availability. By utilizing Aurora with cross-Region replicas, you can create read replicas of the primary database in the secondary AWS Region. The cross-Region replicas maintain close to real-time replication, resulting in low replication latency.

When a write operation occurs in the primary Region, the changes are propagated to the cross-Region replicas, ensuring data consistency across regions. With Aurora’s distributed storage architecture, the read replicas can handle read traffic efficiently, providing fast and responsive read performance.

Option A, installing MySQL on Amazon EC2 in the secondary Region, would require manual setup and management of the replication process, which may not provide the desired replication latency and would involve additional operational overhead.

Option C, creating another RDS for MySQL read replica in the secondary Region, does not guarantee the desired replication latency. RDS for MySQL cross-Region read replicas rely on asynchronous binary log replication, whose lag is typically measured in seconds or more and cannot be held reliably under 1 second.

Option D, implementing Amazon ElastiCache, is a caching service that can improve database query performance by storing frequently accessed data in memory. However, it does not directly address the requirement for cross-Region replication with low latency.

Therefore, option B is the recommended approach to achieve less than 1 second of read replication latency between the primary AWS Region and a secondary AWS Region for an Amazon RDS MySQL database.
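
As a hedged sketch, a cross-Region Aurora read replica cluster can be created from the secondary Region with boto3 (Python) roughly as follows; all identifiers and ARNs are placeholders.

```python
import boto3

# Create the replica cluster in the secondary Region.
rds = boto3.client("rds", region_name="us-west-2")  # assumed secondary Region

rds.create_db_cluster(
    DBClusterIdentifier="aurora-replica-cluster",  # placeholder identifier
    Engine="aurora-mysql",
    # ARN of the (placeholder) primary Aurora cluster in the source Region.
    ReplicationSourceIdentifier="arn:aws:rds:us-east-1:111122223333:cluster:aurora-primary",
)

# A DB instance must then be added to the replica cluster to serve reads.
rds.create_db_instance(
    DBInstanceIdentifier="aurora-replica-1",
    DBClusterIdentifier="aurora-replica-cluster",
    DBInstanceClass="db.r5.large",                 # assumed size
    Engine="aurora-mysql",
)
```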

Question 1369

Exam Question

A company hosts its core network services, including directory services and DNS, in its own on-premises data center. The data center is connected to the AWS Cloud using AWS Direct Connect (DX). Additional AWS accounts are planned that will require quick, cost-effective, and consistent access to these network services.

What should a solutions architect implement to meet these requirements with the LEAST amount of operational overhead?

A. Create a DX connection in each new account. Route the network traffic to the on-premises servers.
B. Configure VPC endpoints in the DX VPC for all required services. Route the network traffic to the on-premises servers.
C. Create a VPN connection between each new account and the DX VPC. Route the network traffic to the on-premises servers.
D. Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers.

Correct Answer

D. Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers.

Explanation

To meet the requirements of quick, cost-effective, and consistent access to core network services hosted in the on-premises data center for additional AWS accounts with the least operational overhead, the recommended solution is:

D. Configure AWS Transit Gateway between the accounts. Assign DX to the transit gateway and route network traffic to the on-premises servers.

AWS Transit Gateway is a service that simplifies network connectivity between multiple Amazon Virtual Private Clouds (VPCs) and on-premises networks. By configuring AWS Transit Gateway between the accounts, you can establish a central hub that connects all the accounts together, including the on-premises data center connected via AWS Direct Connect (DX).

This approach offers the following benefits:

1. Simplified management: Instead of setting up and managing multiple DX connections or VPN connections in each new account, you can manage connectivity through a centralized AWS Transit Gateway. This reduces operational overhead and simplifies network administration.

2. Cost-effective: With AWS Transit Gateway, you only need to provision and manage a single DX connection to the transit gateway. This eliminates the need for multiple DX connections or VPN connections in each account, resulting in cost savings.

3. Quick and consistent access: Once the AWS Transit Gateway is set up and configured, all the connected accounts can have quick and consistent access to the core network services in the on-premises data center. The routing of network traffic to the on-premises servers can be efficiently managed through the transit gateway.

Option A, creating a DX connection in each new account, would lead to increased operational overhead as you would need to set up and manage multiple DX connections individually.

Option B, configuring VPC endpoints in the DX VPC, would only provide access to AWS services within the VPC and would not address connectivity to the on-premises network services.

Option C, creating a VPN connection between each new account and the DX VPC, would also require setting up and managing multiple VPN connections, which would increase operational overhead.

Therefore, option D is the recommended approach to provide quick, cost-effective, and consistent access to the core network services in the on-premises data center for additional AWS accounts with the least operational overhead.
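
The boto3 (Python) sketch below illustrates the central pieces of this design: creating the transit gateway, attaching a VPC, and associating a Direct Connect gateway. All IDs are placeholders, and sharing the transit gateway with the other accounts (typically via AWS Resource Access Manager) is a separate step.

```python
import boto3

ec2 = boto3.client("ec2")
dx = boto3.client("directconnect")

# Central hub that the planned accounts can attach to.
tgw = ec2.create_transit_gateway(Description="shared core-services hub")
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# Attach one (placeholder) VPC; each account repeats this for its own VPC.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0aaa1111bbbb2222c"],
)

# Associate the (placeholder) Direct Connect gateway so on-premises
# routes reach the transit gateway.
dx.create_direct_connect_gateway_association(
    directConnectGatewayId="11112222-3333-4444-5555-666677778888",
    gatewayId=tgw_id,
)
```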

Question 1370

Exam Question

A company is planning to deploy an Amazon RDS DB instance running Amazon Aurora. The company has a backup retention policy requirement of 90 days.

Which solution should a solutions architect recommend?

A. Set the backup retention period to 90 days when creating the RDS DB instance.
B. Configure RDS to copy automated snapshots to a user-managed Amazon S3 bucket with a lifecycle policy set to delete after 90 days.
C. Create an AWS Backup plan to perform a daily snapshot of the RDS database with the retention set to 90 days. Create an AWS Backup job to schedule the execution of the backup plan daily.
D. Use a daily scheduled event with Amazon CloudWatch Events to execute a custom AWS Lambda function that makes a copy of the RDS automated snapshot. Purge snapshots older than 90 days.

Correct Answer

C. Create an AWS Backup plan to perform a daily snapshot of the RDS database with the retention set to 90 days. Create an AWS Backup job to schedule the execution of the backup plan daily.

Explanation

To meet the backup retention policy requirement of 90 days for an Amazon RDS DB instance running Amazon Aurora, the recommended solution is:

C. Create an AWS Backup plan to perform a daily snapshot of the RDS database with the retention set to 90 days. Create an AWS Backup job to schedule the execution of the backup plan daily.

AWS Backup is a fully managed backup service that centralizes and automates the backup of data across AWS services. By creating an AWS Backup plan, you can define the desired backup schedule, retention period, and backup vault settings.

In this case, you can create an AWS Backup plan with a daily snapshot of the RDS database and set the retention period to 90 days to meet the backup retention policy requirement. You can then schedule the execution of the backup plan using an AWS Backup job to ensure regular backups are taken.

Option A, setting the backup retention period to 90 days when creating the RDS DB instance, is not possible: the automated backup retention period for Amazon RDS and Aurora can be set to a maximum of 35 days.

Option B, configuring RDS to copy automated snapshots to a user-managed Amazon S3 bucket with a lifecycle policy set to delete after 90 days, would only handle the copying and deletion of snapshots but may not provide a comprehensive backup management solution.

Option D, using a custom AWS Lambda function triggered by a CloudWatch Events scheduled event, would require additional development effort and ongoing maintenance of the custom solution, which can be complex and less preferable compared to using the managed AWS Backup service.

Therefore, option C is the recommended approach to meet the backup retention policy requirement for the Amazon RDS DB instance running Amazon Aurora.
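
As a sketch with placeholder names and ARNs, the backup plan and resource selection from option C could be created with boto3 (Python) as follows.

```python
import boto3

backup = boto3.client("backup")

# Daily backup rule with 90-day retention; vault and plan names are placeholders.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "rds-aurora-90-day-plan",
        "Rules": [
            {
                "RuleName": "daily-90-day-retention",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
                "Lifecycle": {"DeleteAfterDays": 90},
            }
        ],
    }
)

# Select the (placeholder) Aurora cluster for the plan to back up.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "aurora-cluster",
        "IamRoleArn": "arn:aws:iam::111122223333:role/service-role/AWSBackupDefaultServiceRole",
        "Resources": ["arn:aws:rds:us-east-1:111122223333:cluster:my-aurora-cluster"],
    },
)
```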
