AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 13

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free of charge to help you prepare for the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the AWS Certified Solutions Architect – Associate SAA-C03 certification.

Question 841

Exam Question

A company wants to move from many standalone AWS accounts to a consolidated, multi-account architecture. The company plans to create many new AWS accounts for different business units. The company needs to authenticate access to these AWS accounts by using a centralized corporate directory service.

Which combination of actions should a solutions architect recommend to meet these requirements? (Choose two.)

A. Create a new organization in AWS Organizations with all features turned on. Create the new AWS accounts in the organization.
B. Set up an Amazon Cognito identity pool. Configure AWS Single Sign-On to accept Amazon Cognito authentication.
C. Configure a service control policy (SCP) to manage the AWS accounts. Add AWS Single Sign-On to AWS Directory Service.
D. Create a new organization in AWS Organizations. Configure the organization’s authentication mechanism to use AWS Directory Service directly.
E. Set up AWS Single Sign-On (AWS SSO) in the organization. Configure AWS SSO, and integrate it with the company’s corporate directory service.

Correct Answer

A. Create a new organization in AWS Organizations with all features turned on. Create the new AWS accounts in the organization.
E. Set up AWS Single Sign-On (AWS SSO) in the organization. Configure AWS SSO, and integrate it with the company’s corporate directory service.

Explanation

SCPs affect only IAM users and roles that are managed by accounts that are part of the organization. SCPs don’t affect resource-based policies directly. They also don’t affect users or roles from accounts outside the organization. For example, consider an Amazon S3 bucket that’s owned by account A in an organization. The bucket policy (a resource-based policy) grants access to users from account B outside the organization. Account A has an SCP attached. That SCP doesn’t apply to those outside users in account B. The SCP applies only to users that are managed by account A in the organization.

An SCP restricts permissions for IAM users and roles in member accounts, including the member account’s root user. Any account has only those permissions permitted by every parent above it. If a permission is blocked at any level above the account, either implicitly (by not being included in an Allow policy statement) or explicitly (by being included in a Deny policy statement), a user or role in the affected account can’t use that permission, even if the account administrator attaches the AdministratorAccess IAM policy with */* permissions to the user.
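As an illustration of how an SCP acts as a permission filter, the following is a minimal sketch using boto3; the policy name, statement, and target OU ID are hypothetical, not part of the exam scenario:

import json
import boto3

# A minimal sketch, assuming a management account in an organization with
# all features enabled. SCPs filter permissions; they never grant them.
orgs = boto3.client("organizations")

scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Example guardrail: deny member accounts the ability to
            # leave the organization.
            "Effect": "Deny",
            "Action": "organizations:LeaveOrganization",
            "Resource": "*",
        }
    ],
}

response = orgs.create_policy(
    Name="DenyLeaveOrganization",  # hypothetical policy name
    Description="Prevent member accounts from leaving the organization",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to an organizational unit (placeholder OU ID).
orgs.attach_policy(
    PolicyId=response["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-ab12-example",
)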

To meet the requirements of using a centralized corporate directory service for authentication in a consolidated, multi-account architecture, a solutions architect should recommend the following actions:

A. Create a new organization in AWS Organizations with all features turned on. Create the new AWS accounts in the organization.
E. Set up AWS Single Sign-On (AWS SSO) in the organization. Configure AWS SSO and integrate it with the company’s corporate directory service.

Explanation:

A centralized multi-account architecture can be achieved by creating a new organization in AWS Organizations (Action A). AWS Organizations provides management and governance for multiple AWS accounts, enabling centralized control and policy enforcement. By creating the new AWS accounts within the organization, you can establish a hierarchical structure and manage them collectively.
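A hedged sketch of these two steps with boto3 follows; the account name and email address are placeholders:

import boto3

orgs = boto3.client("organizations")

# Step 1: create the organization with all features enabled, which unlocks
# SCPs and AWS SSO integration (consolidated billing alone is not enough).
orgs.create_organization(FeatureSet="ALL")

# Step 2: create a member account per business unit. CreateAccount is
# asynchronous, so poll describe_create_account_status for completion.
result = orgs.create_account(
    Email="finance-aws@example.com",  # placeholder address
    AccountName="finance",            # placeholder business unit name
)
status_id = result["CreateAccountStatus"]["Id"]
print(orgs.describe_create_account_status(CreateAccountRequestId=status_id))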

Once the organization is set up, you should set up AWS Single Sign-On (AWS SSO) (Action E). AWS SSO is a service that simplifies user management and provides centralized authentication for multiple AWS accounts. By integrating AWS SSO with the company’s corporate directory service, such as Microsoft Active Directory, you can use existing corporate credentials to authenticate access to the AWS accounts.

Together, these actions allow you to create a consolidated multi-account architecture while using a centralized corporate directory service for authentication and access control.

Reference

  • Amazon Cognito
  • AWS > Documentation > AWS Organizations > User Guide > Service control policies (SCPs)

Question 842

Exam Question

A company operates a two-tier application for image processing. The application uses two Availability Zones, each with one public subnet and one private subnet. An Application Load Balancer (ALB) for the web tier uses the public subnets. Amazon EC2 instances for the application tier use the private subnets. Users report that the application is running more slowly than expected. A security audit of the web server log files shows that the application is receiving millions of illegitimate requests from a small number of IP addresses. A solutions architect needs to resolve the immediate performance problem while the company investigates a more permanent solution.

What should the solutions architect recommend to meet this requirement?

A. Modify the inbound security group for the web tier. Add a deny rule for the IP addresses that are consuming resources.
B. Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.
C. Modify the inbound security group for the application tier. Add a deny rule for the IP addresses that are consuming resources.
D. Modify the network ACL for the application tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.

Correct Answer

B. Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.

Explanation

To resolve the immediate performance problem caused by illegitimate requests while the company investigates a more permanent solution, the solutions architect should recommend the following:

B. Modify the network ACL for the web tier subnets. Add an inbound deny rule for the IP addresses that are consuming resources.

Explanation:

In this scenario, the application is receiving millions of illegitimate requests from a small number of IP addresses, which is impacting its performance. To mitigate this issue, it is recommended to modify the network ACL (Access Control List) for the web tier subnets (Option B).

Network ACLs act as a firewall at the subnet level and can be used to control inbound and outbound traffic. By adding an inbound deny rule to the network ACL for the web tier subnets, specifically for the IP addresses that are consuming resources, the company can block or restrict access from those IP addresses. This will help reduce the illegitimate requests and alleviate the immediate performance problem.
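For example, a deny entry for one abusive address range could be added with boto3 roughly as follows; the network ACL ID, rule number, and CIDR block are hypothetical:

import boto3

ec2 = boto3.client("ec2")

# Deny all inbound traffic from the offending range. Network ACL rules are
# evaluated in ascending rule-number order, so this deny must use a lower
# number than the subnet's broader allow rules.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",  # placeholder web tier subnet ACL
    RuleNumber=90,                         # lower than the existing allows
    Protocol="-1",                         # all protocols
    RuleAction="deny",
    Egress=False,                          # inbound rule
    CidrBlock="203.0.113.0/24",            # example abusive range
)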

Modifying the security group for the web tier (Option A) or the security group for the application tier (Option C) will not work because security groups do not support deny rules. Security groups are stateful, allow-only firewalls that are associated with EC2 instances and control traffic at the instance level, not the subnet level.

Similarly, modifying the network ACL for the application tier subnets (Option D) will not directly address the performance problem related to the web tier and the illegitimate requests.

Therefore, the recommended action is to modify the network ACL for the web tier subnets and add an inbound deny rule for the IP addresses causing the performance issue.

Question 843

Exam Question

A company is planning to run a group of Amazon EC2 instances that connect to an Amazon Aurora database. The company has built an AWS CloudFormation template to deploy the EC2 instances and the Aurora DB cluster. The company wants to allow the instances to authenticate to the database in a secure way. The company does not want to maintain static database credentials.

Which solution meets these requirements with the LEAST operational effort?

A. Create a database user with a username and password. Add parameters for the database user name and password to the CloudFormation template. Pass the parameters to the EC2 instances when the instances are launched.
B. Create a database user with a username and password. Store the username and password in AWS Systems Manager Parameter Store. Configure the EC2 instances to retrieve the database credentials from Parameter Store.
C. Configure the DB cluster to use IAM database authentication. Create a database user to use with IAM authentication. Associate a role with the EC2 instances to allow applications on the instances to access the database.
D. Configure the DB cluster to use IAM database authentication with an IAM user. Create a database user that has a name that matches the IAM user. Associate the IAM user with the EC2 instances to allow applications on the instances to access the database.

Correct Answer

C. Configure the DB cluster to use IAM database authentication. Create a database user to use with IAM authentication. Associate a role with the EC2 instances to allow applications on the instances to access the database.

Explanation

To allow the EC2 instances to authenticate securely with the Aurora database while minimizing operational effort, the recommended solution is:

C. Configure the DB cluster to use IAM database authentication. Create a database user to use with IAM authentication. Associate a role with the EC2 instances to allow applications on the instances to access the database.

Explanation:

In this scenario, the company wants to enable secure authentication between the EC2 instances and the Aurora database without maintaining static database credentials or introducing additional operational overhead. IAM database authentication provides a solution for this requirement.

Option C suggests configuring the Aurora DB cluster to use IAM database authentication. With IAM database authentication, you can associate database users with IAM users or IAM roles. This allows you to leverage IAM for authentication, eliminating the need for managing database-specific credentials. You would create a database user to use with IAM authentication.

To enable the EC2 instances to access the database, you would associate an IAM role with the EC2 instances. This IAM role should have the necessary permissions to access the database. By doing so, the applications running on the EC2 instances can utilize the role’s permissions to access the Aurora database securely.
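As a minimal sketch, an application on the instance might obtain a short-lived authentication token instead of a static password like this; the region, cluster endpoint, and database user are placeholders:

import boto3

# The instance role's credentials are picked up automatically; no static
# database password is stored anywhere.
rds = boto3.client("rds", region_name="us-east-1")  # placeholder region

token = rds.generate_db_auth_token(
    DBHostname="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    Port=3306,
    DBUsername="app_user",  # the database user created for IAM authentication
)

# The token is then passed as the password to the usual database driver
# over SSL; the token expires after a short period, so it is generated
# fresh for each connection.
print(token[:40], "...")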

This solution minimizes operational effort as it avoids the management of static database credentials and doesn’t require storing credentials in the CloudFormation template (Option A). It also eliminates the need to store credentials in AWS Systems Manager Parameter Store and retrieve them from there (Option B).

Option D suggests using IAM database authentication with an IAM user, but IAM roles are preferred over IAM users for EC2 instance access because roles provide temporary, automatically rotated credentials and align with security best practices.

Therefore, the recommended solution is to configure the Aurora DB cluster to use IAM database authentication, create a database user for IAM authentication, and associate an IAM role with the EC2 instances to allow secure access to the database.

CreationPolicy attribute

Finally, you need a way to instruct CloudFormation to complete stack creation only after all the services (such as Apache and MySQL) are running and not after all the stack resources are created. In other words, if you use the template from the earlier section to launch a stack, CloudFormation sets the status of the stack as CREATE_COMPLETE after it successfully creates all the resources. However, if one or more services failed to start, CloudFormation still sets the stack status as CREATE_COMPLETE. To prevent the status from changing to CREATE_COMPLETE until all the services have successfully started, you can add a CreationPolicy attribute to the instance. This attribute keeps the instance's status at CREATE_IN_PROGRESS until CloudFormation receives the required number of success signals or the timeout period is exceeded, so you can control when the instance has been successfully created.
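A hedged sketch of where the attribute sits follows, expressed as a Python dict passed to CloudFormation via boto3; the AMI ID and stack name are placeholders:

import json
import boto3

# Minimal template fragment: the stack stays CREATE_IN_PROGRESS until the
# instance sends one success signal or 15 minutes elapse. In practice the
# instance's UserData would start the services and then run cfn-signal.
template = {
    "Resources": {
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",  # placeholder AMI
                "InstanceType": "t3.micro",
            },
            "CreationPolicy": {
                "ResourceSignal": {"Count": 1, "Timeout": "PT15M"}
            },
        }
    }
}

boto3.client("cloudformation").create_stack(
    StackName="signal-demo",  # hypothetical stack name
    TemplateBody=json.dumps(template),
)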

Reference

AWS > Documentation > AWS CloudFormation > User Guide > Deploying applications on Amazon EC2 with AWS CloudFormation

Question 844

Exam Question

A media company is using two video conversion tools that run on Amazon EC2 instances. One tool runs on Windows instances, and the other tool runs on Linux instances. Each video file is large in size and must be processed by both tools. The company needs a storage solution that can provide a centralized file system that can be mounted on all the EC2 instances that are used in this process.

Which solution meets these requirements?

A. Use Amazon FSx for Windows File Server for the Windows instances. Use Amazon Elastic File System (Amazon EFS) with Max I/O performance mode for the Linux instances.
B. Use Amazon FSx for Windows File Server for the Windows instances. Use Amazon FSx for Lustre for the Linux instances. Link both Amazon FSx file systems to the same Amazon S3 bucket.
C. Use Amazon Elastic File System (Amazon EFS) with General Purpose performance mode for the Windows instances and the Linux instances.
D. Use Amazon FSx for Windows File Server for the Windows instances and the Linux instances.

Correct Answer

A. Use Amazon FSx for Windows File Server for the Windows instances. Use Amazon Elastic File System (Amazon EFS) with Max I/O performance mode for the Linux instances.

Explanation

The solution that meets the requirements of providing a centralized file system that can be mounted on both the Windows and Linux instances is:

A. Use Amazon FSx for Windows File Server for the Windows instances. Use Amazon Elastic File System (Amazon EFS) with Max I/O performance mode for the Linux instances.

Explanation:

In this scenario, the media company requires a centralized file system that can be mounted on both Windows and Linux instances to facilitate the processing of large video files by two different video conversion tools.

Option A suggests using Amazon FSx for Windows File Server for the Windows instances and Amazon EFS with Max I/O performance mode for the Linux instances.

Amazon FSx for Windows File Server provides a fully managed, native Windows file system that is accessible over the SMB protocol. It is well-suited for Windows workloads and can be easily mounted on Windows instances.

Amazon EFS, on the other hand, is a scalable and fully managed file system that supports the Network File System (NFS) protocol. It can be mounted on Linux instances and offers compatibility with multiple Linux distributions.

By using Amazon FSx for Windows File Server for the Windows instances and Amazon EFS for the Linux instances, the media company can achieve a centralized file system that caters to both types of instances.

Option B, which suggests using Amazon FSx for Windows File Server for the Windows instances and Amazon FSx for Lustre for the Linux instances, is not necessary in this case as Lustre is primarily designed for high-performance computing workloads and may not be the most suitable choice for this scenario.

Option C, which suggests using Amazon EFS with General Purpose performance mode for both the Windows and Linux instances, does not work because Amazon EFS uses the NFS protocol and is not supported on Windows EC2 instances.

Option D, which suggests using Amazon FSx for Windows File Server for both the Windows and Linux instances, is not the intended choice because FSx for Windows File Server is a native Windows (SMB) file system designed for Windows workloads, while the Linux-based tool is better served by a POSIX-compliant NFS file system such as Amazon EFS.

Therefore, the recommended solution is to use Amazon FSx for Windows File Server for the Windows instances and Amazon EFS with Max I/O performance mode for the Linux instances.

Question 845

Exam Question

A company wants to monitor its AWS costs for financial review. The cloud operations team is designing an architecture in the AWS Organizations management account to query AWS Cost and Usage Reports for all member accounts. The team must run this query once a month and provide a detailed analysis of the bill.

Which solution is the MOST scalable and cost-effective way to meet these requirements?

A. Enable Cost and Usage Reports in the management account. Deliver reports to Amazon Kinesis. Use Amazon EMR for analysis.
B. Enable Cost and Usage Reports in the management account. Deliver the reports to Amazon S3. Use Amazon Athena for analysis.
C. Enable Cost and Usage Reports for member accounts. Deliver the reports to Amazon S3. Use Amazon Redshift for analysis.
D. Enable Cost and Usage Reports for member accounts. Deliver the reports to Amazon Kinesis. Use Amazon QuickSight for analysis.

Correct Answer

B. Enable Cost and Usage Reports in the management account. Deliver the reports to Amazon S3. Use Amazon Athena for analysis.

Explanation

The most scalable and cost-effective solution to meet the requirements of monitoring AWS costs for financial review is:

B. Enable Cost and Usage Reports in the management account. Deliver the reports to Amazon S3. Use Amazon Athena for analysis.

Option B suggests enabling Cost and Usage Reports in the management account and delivering the reports to Amazon S3. The reports can be analyzed using Amazon Athena.

Enabling Cost and Usage Reports in the management account allows you to consolidate the cost and usage data from all member accounts into a centralized location. By delivering the reports to Amazon S3, you have a durable and scalable storage solution for the cost data.

Amazon Athena is a serverless query service that allows you to analyze data directly from Amazon S3 using standard SQL. It is well-suited for ad-hoc and interactive analysis of large datasets. By leveraging Amazon Athena, you can query the cost and usage data stored in Amazon S3 without the need for additional infrastructure.
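As a sketch, the monthly query might be submitted with boto3 as follows; the CUR database, table, column names, and results bucket are assumptions based on a typical CUR/Athena integration:

import boto3

athena = boto3.client("athena")

# Example monthly roll-up of unblended cost per service. The database and
# table are created by the CUR/Athena integration; names vary by setup.
query = """
SELECT line_item_product_code,
       SUM(line_item_unblended_cost) AS monthly_cost
FROM cur_db.cost_and_usage            -- hypothetical CUR table
WHERE year = '2023' AND month = '6'
GROUP BY line_item_product_code
ORDER BY monthly_cost DESC
"""

athena.start_query_execution(
    QueryString=query,
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder bucket
)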

This solution is both scalable and cost-effective. Amazon S3 provides highly scalable storage for the reports, and Amazon Athena offers a serverless and pay-per-query model, eliminating the need for dedicated resources and reducing costs when compared to running and managing an Amazon EMR cluster or using Amazon Redshift.

Option A suggests using Amazon Kinesis and Amazon EMR for analysis. While this combination can be used for analyzing the Cost and Usage Reports, it introduces additional complexity and cost compared to using Amazon S3 and Amazon Athena directly.

Option C suggests using Amazon Redshift for analysis. While Amazon Redshift is a powerful data warehousing solution, it may not be the most cost-effective choice for analyzing cost and usage data. Amazon Redshift is better suited for scenarios requiring complex analytics and large-scale data warehousing.

Option D suggests using Amazon Kinesis and Amazon QuickSight for analysis. While Amazon QuickSight is a business intelligence tool that can visualize data, it may not be the most cost-effective option for analyzing cost and usage data. Amazon Kinesis, on the other hand, is typically used for real-time streaming data and may not be the best fit for batch analysis of cost and usage reports.

Therefore, the recommended solution is to enable Cost and Usage Reports in the management account, deliver the reports to Amazon S3, and use Amazon Athena for analysis.

If you are an administrator of an AWS Organizations management account and do not want any of the member accounts in your organization to set up a CUR, you can do the following:

  • (Recommended) If you have opted in to Organizations with all features enabled, you can apply a service control policy (SCP). Note that SCPs apply only to member accounts; if you want to restrict any IAM users associated with the management account from setting up a CUR, you will need to adjust their specific IAM permissions. SCPs are also not retroactive, so they will not deactivate any CURs a member account may have set up before the SCP was applied.

Reference

AWS > Documentation > Cost and Usage Report > User Guide > What are AWS Cost and Usage Reports?

Question 846

Exam Question

A solutions architect must provide an automated solution for a company’s compliance policy that states security groups cannot include a rule that allows SSH from 0.0.0.0/0. The company needs to be notified if there is any breach in the policy. A solution is needed as soon as possible.

What should the solutions architect do to meet these requirements with the LEAST operational overhead?

A. Write an AWS Lambda script that monitors security groups for SSH being open to 0.0.0.0/0 addresses and creates a notification every time it finds one.
B. Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification Service (Amazon SNS) notification when a noncompliant rule is created.
C. Create an IAM role with permissions to globally open security groups and network ACLs. Create an Amazon Simple Notification Service (Amazon SNS) topic to generate a notification every time the role is assumed by a user.
D. Configure a service control policy (SCP) that prevents non-administrative users from creating or editing security groups. Create a notification in the ticketing system when a user requests a rule that needs administrator permissions.

Correct Answer

B. Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification Service (Amazon SNS) notification when a noncompliant rule is created.

Explanation

To meet the requirements with the least operational overhead, the solutions architect should:

B. Enable the restricted-ssh AWS Config managed rule and generate an Amazon Simple Notification Service (Amazon SNS) notification when a noncompliant rule is created.

Option B suggests enabling the restricted-ssh AWS Config managed rule. AWS Config is a service that allows you to assess, audit, and evaluate the configurations of your AWS resources. The restricted-ssh managed rule specifically checks for security groups with SSH open to 0.0.0.0/0, which violates the compliance policy.

By enabling this managed rule, AWS Config will automatically evaluate the security groups and generate compliance reports. When a noncompliant rule is detected, you can configure AWS Config to generate an Amazon SNS notification. This notification will alert you about the breach in the policy, allowing you to take immediate action.
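A hedged sketch of enabling the managed rule with boto3 follows; INCOMING_SSH_DISABLED is the managed-rule identifier behind restricted-ssh, and this assumes an AWS Config recorder is already running in the account:

import boto3

config = boto3.client("config")

# Enable the AWS-managed rule that flags security groups allowing inbound
# SSH from 0.0.0.0/0. AWS Config then evaluates continuously; compliance
# change events can be routed to an Amazon SNS topic for notification.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "restricted-ssh",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "INCOMING_SSH_DISABLED",
        },
    }
)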

This solution has the least operational overhead because AWS Config handles the continuous evaluation of security group configurations, eliminating the need to manually monitor them. It also integrates with Amazon SNS to provide instant notifications, ensuring timely awareness of any policy breaches.

Option A suggests writing an AWS Lambda script to monitor security groups and generate notifications. While this approach can be effective, it requires more manual effort to develop, deploy, and maintain the Lambda function. Enabling AWS Config’s managed rule is a more streamlined and automated solution.

Option C suggests creating an IAM role and using it to monitor security groups. However, this option involves granting global permissions to open security groups, which can introduce security risks. Additionally, it requires more manual setup and configuration compared to using AWS Config.

Option D suggests using a service control policy (SCP) to prevent non-administrative users from creating or editing security groups. While SCPs are useful for enforcing organizational policies, they do not provide the immediate notification capability required in this scenario.

Therefore, the recommended solution is to enable the restricted-ssh AWS Config managed rule and generate an Amazon SNS notification when a noncompliant rule is created.

Question 847

Exam Question

A company is using an Application Load Balancer (ALB) to present its application to the internet. The company finds abnormal traffic access patterns across the application. A solutions architect needs to improve visibility into the infrastructure to help the company understand these abnormalities better.

What is the MOST operationally efficient solution that meets these requirements?

A. Create a table in Amazon Athena for AWS CloudTrail logs. Create a query for the relevant information.
B. Enable ALB access logging to Amazon S3. Create a table in Amazon Athena, and query the logs.
C. Enable ALB access logging to Amazon S3. Open each file in a text editor, and search each line for the relevant information.
D. Use Amazon EMR on a dedicated Amazon EC2 instance to directly query the ALB to acquire traffic access log information.

Correct Answer

B. Enable ALB access logging to Amazon S3. Create a table in Amazon Athena, and query the logs.

Explanation

The most operationally efficient solution to improve visibility into the infrastructure and understand abnormal traffic access patterns in this scenario is:

B. Enable ALB access logging to Amazon S3. Create a table in Amazon Athena and query the logs.

Option B suggests enabling ALB access logging to Amazon S3. When enabled, the ALB will generate access logs that contain detailed information about each request processed by the load balancer, including the client IP address, timestamps, response codes, and more. These logs are then stored in an S3 bucket.

By configuring a table in Amazon Athena, which is a serverless query service, you can easily query and analyze the ALB access logs using SQL-like queries. Amazon Athena can directly query data stored in S3, including the ALB access logs. It provides a flexible and efficient way to perform ad hoc analysis and gain insights into the abnormal traffic access patterns.
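Enabling the access logs is a one-time attribute change; a sketch with boto3 follows, where the load balancer ARN, bucket, and prefix are placeholders (the bucket policy must also allow ELB log delivery):

import boto3

elbv2 = boto3.client("elbv2")

# Turn on ALB access logging to an S3 bucket. Each log entry records the
# client IP, timestamps, request path, and response codes.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/web/abc123",  # placeholder ARN
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-alb-logs"},  # placeholder bucket
        {"Key": "access_logs.s3.prefix", "Value": "web-alb"},      # placeholder prefix
    ],
)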

This solution is operationally efficient because it automates the logging process and makes the logs easily accessible for analysis. By leveraging Amazon Athena, you can query and analyze the logs without the need for manual parsing or searching through each file, as mentioned in options A and C. Furthermore, there is no need to set up and manage a separate EMR cluster, as suggested in option D.

Therefore, the recommended solution is to enable ALB access logging to Amazon S3, create a table in Amazon Athena, and query the logs to gain visibility and understand abnormal traffic access patterns efficiently.

Reference

AWS Big Data Blog > Catalog and analyze Application Load Balancer logs more efficiently with AWS Glue custom classifiers and Amazon Athena

Question 848

Exam Question

A security team needs to enforce the rotation of all IAM users’ access keys every 90 days. If an access key is found to be older, the key must be made inactive and removed. A solutions architect must create a solution that will check for and remediate any keys older than 90 days.

Which solution meets these requirements with the LEAST operational effort?

A. Create an AWS Config rule to check for the key age. Configure the AWS Config rule to run an AWS Batch job to remove the key.
B. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to check for the key age. Configure the rule to run an AWS Batch job to remove the key.
C. Create an AWS Config rule to check for the key age. Define an Amazon EventBridge (Amazon CloudWatch Events) rule to schedule an AWS Lambda function to remove the key.
D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule to check for the key age. Define an EventBridge (CloudWatch Events) rule to run an AWS Batch job to remove the key.

Correct Answer

C. Create an AWS Config rule to check for the key age. Define an Amazon EventBridge (Amazon CloudWatch Events) rule to schedule an AWS Lambda function to remove the key.

Explanation

The solution that meets the requirements with the least operational effort is:

C. Create an AWS Config rule to check for the key age. Define an Amazon EventBridge (Amazon CloudWatch Events) rule to schedule an AWS Lambda function to remove the key.

Option C suggests creating an AWS Config rule to check the age of IAM users’ access keys. AWS Config provides a configuration and compliance service that enables you to assess, audit, and evaluate the configurations of your AWS resources. By creating a custom AWS Config rule, you can define the logic to check the age of access keys for IAM users.

To automate the remediation process, you can define an Amazon EventBridge (Amazon CloudWatch Events) rule. This rule can be configured to trigger an AWS Lambda function at a scheduled interval, such as every day. The Lambda function can be programmed to identify and remove access keys that are older than 90 days. The Lambda function will have the necessary permissions to make the access key inactive and remove it from the IAM user.
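A minimal sketch of such a Lambda handler follows, assuming the function's execution role has the iam:ListUsers, iam:ListAccessKeys, iam:UpdateAccessKey, and iam:DeleteAccessKey permissions:

from datetime import datetime, timezone

import boto3

MAX_AGE_DAYS = 90
iam = boto3.client("iam")

def handler(event, context):
    # Walk every IAM user and every access key; deactivate and then delete
    # any key older than the 90-day rotation policy.
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            keys = iam.list_access_keys(UserName=user["UserName"])
            for key in keys["AccessKeyMetadata"]:
                age = datetime.now(timezone.utc) - key["CreateDate"]
                if age.days > MAX_AGE_DAYS:
                    iam.update_access_key(
                        UserName=user["UserName"],
                        AccessKeyId=key["AccessKeyId"],
                        Status="Inactive",
                    )
                    iam.delete_access_key(
                        UserName=user["UserName"],
                        AccessKeyId=key["AccessKeyId"],
                    )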

This solution minimizes operational effort as it combines the capabilities of AWS Config and Amazon EventBridge to automate the checking and remediation process. The AWS Config rule handles the checking of key age, and the EventBridge rule triggers the Lambda function for remediation. There is no need to set up and manage an additional service like AWS Batch, as mentioned in options A and B. Option D suggests using EventBridge to trigger a Batch job, which introduces unnecessary complexity.

Therefore, the recommended solution is to create an AWS Config rule to check for the key age and define an Amazon EventBridge rule to schedule an AWS Lambda function for removing the key, ensuring compliance with the 90-day rotation policy with minimal operational effort.

Reference

AWS Cloud Operations & Migrations Blog > Managing aged access keys through AWS Config remediations

Question 849

Exam Question

A company needs to build a reporting solution on AWS. The solution must support SQL queries that data analysts run on the data. The data analysts will run fewer than 10 total queries each day. The company generates 3 GB of new data daily in an on-premises relational database. This data needs to be transferred to AWS to perform reporting tasks.

What should a solutions architect recommend to meet these requirements at the LOWEST cost?

A. Use AWS Database Migration Service (AWS DMS) to replicate the data from the on-premises database into Amazon S3. Use Amazon Athena to query the data.
B. Use an Amazon Kinesis Data Firehose delivery stream to deliver the data into an Amazon Elasticsearch Service (Amazon ES) cluster. Run the queries in Amazon ES.
C. Export a daily copy of the data from the on-premises database. Use an AWS Storage Gateway file gateway to store and copy the export into Amazon S3. Use an Amazon EMR cluster to query the data.
D. Use AWS Database Migration Service (AWS DMS) to replicate the data from the on-premises database and load it into an Amazon Redshift cluster. Use the Amazon Redshift cluster to query the data.

Correct Answer

D. Use AWS Database Migration Service (AWS DMS) to replicate the data from the on-premises database and load it into an Amazon Redshift cluster. Use the Amazon Redshift cluster to query the data.

Explanation

To build a reporting solution on AWS that supports the SQL queries the data analysts run, and to transfer the 3 GB of new data generated daily in the on-premises relational database, the solutions architect can use AWS Database Migration Service (AWS DMS) to replicate the data from the on-premises database and load it into an Amazon Redshift cluster. The data analysts can then run SQL queries on the Amazon Redshift cluster.

Therefore, the solution that meets these requirements at the LOWEST cost is D. Use AWS Database Migration Service (AWS DMS) to replicate the data from the on-premises database and load it into an Amazon Redshift cluster. Use the Amazon Redshift cluster to query the data.

AWS DMS cannot migrate or replicate changes to a schema with a name that begins with underscore (_). If you have schemas that have a name that begins with an underscore, use mapping transformations to rename the schema on the target.
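A sketch of such a table-mapping transformation rule follows, shown as a Python dict; the schema names are hypothetical, and the mapping would be passed as the task's table mappings when the replication task is created:

import json

# DMS table-mapping rule that renames a schema whose name starts with an
# underscore so it can be replicated to the target.
table_mappings = {
    "rules": [
        {
            "rule-type": "transformation",
            "rule-id": "1",
            "rule-name": "rename-underscore-schema",
            "rule-target": "schema",
            "object-locator": {"schema-name": "_sales"},  # hypothetical source schema
            "rule-action": "rename",
            "value": "sales",
        }
    ]
}

print(json.dumps(table_mappings, indent=2))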

Amazon Redshift doesn’t support VARCHARs larger than 64 KB. LOBs from traditional databases can’t be stored in Amazon Redshift.

Applying a DELETE statement to a table with a multi-column primary key is not supported when any of the primary key column names use a reserved word. See the Amazon Redshift documentation for the list of reserved words.

You may experience performance issues if your source system performs UPDATE operations on the primary key of a source table. These performance issues occur when applying changes to the target. This is because UPDATE (and DELETE) operations depend on the primary key value to identify the target row. If you update the primary key of a source table, your task log will contain messages like the following:

Update on table 1 changes PK to a PK that was previously updated in the same bulk update.

Reference

AWS > Documentation > AWS Database Migration Service > User Guide > Using an Amazon Redshift database as a target for AWS Database Migration Service

Question 850

Exam Question

A company runs its two-tier ecommerce website on AWS. The web tier consists of a load balancer that sends traffic to Amazon EC2 instances. The database tier uses an Amazon RDS DB instance. The EC2 instances and the RDS DB instance should not be exposed to the public internet. The EC2 instances require internet access to complete payment processing of orders through a third-party web service. The application must be highly available.

Which combination of configuration options will meet these requirements? (Choose two.)

A. Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.
B. Configure a VPC with two private subnets and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the private subnets.
C. Use an Auto Scaling group to launch the EC2 instances in public subnets across two Availability Zones. Deploy an RDS Multi-AZ DB instance in private subnets.
D. Configure a VPC with one public subnet, one private subnet, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnet.
E. Configure a VPC with two public subnets, two private subnets, and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the public subnets.

Correct Answer

A. Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.
B. Configure a VPC with two private subnets and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the private subnets.

Explanation

The combination of configuration options that will meet the given requirements is:

A. Use an Auto Scaling group to launch the EC2 instances in private subnets. Deploy an RDS Multi-AZ DB instance in private subnets.
B. Configure a VPC with two private subnets and two NAT gateways across two Availability Zones. Deploy an Application Load Balancer in the private subnets.

The requirement states that the EC2 instances and RDS DB instance should not be exposed to the public internet. To achieve this, we need to place them in private subnets.

Option A recommends using an Auto Scaling group to launch the EC2 instances in private subnets. This ensures that the EC2 instances are not directly accessible from the internet. Additionally, deploying an RDS Multi-AZ DB instance in private subnets provides high availability and eliminates the need for public internet access.

Option B suggests configuring a VPC with two private subnets and two NAT gateways across two Availability Zones. By placing the EC2 instances in private subnets and using NAT gateways, the instances can still access the internet for payment processing while maintaining security by not being directly exposed to the public internet. Deploying an Application Load Balancer in the private subnets allows external traffic to be load balanced to the EC2 instances without exposing them directly to the internet.
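A hedged sketch of wiring up one Availability Zone's NAT gateway and private route with boto3 follows; all resource IDs are placeholders, and the second Availability Zone is configured the same way:

import boto3

ec2 = boto3.client("ec2")

# A NAT gateway with an Elastic IP lets instances in private subnets reach
# the internet for payment processing without being reachable from it.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0123456789abcdef0",   # placeholder subnet ID (AZ 1)
    AllocationId="eipalloc-0123456789abcdef0",  # placeholder Elastic IP allocation
)

# Send the private subnet's internet-bound traffic through the NAT gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",  # placeholder private route table ID
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat["NatGateway"]["NatGatewayId"],
)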

Option C suggests launching the EC2 instances in public subnets, which would expose them to the public internet. This contradicts the requirement of not exposing the EC2 instances to the public internet.

Option D suggests using a single public subnet for the VPC, which is not recommended for high availability. It’s preferable to have multiple subnets across different Availability Zones to ensure redundancy.

Option E suggests deploying an Application Load Balancer in public subnets, which would expose the load balancer and EC2 instances to the public internet. This contradicts the requirement of not exposing the EC2 instances to the public internet.

Therefore, options A and B provide the appropriate configuration to meet the requirements.
