
AWS Certified Solutions Architect – Associate SAA-C03 Exam Questions and Answers – Page 32

The latest AWS Certified Solutions Architect – Associate SAA-C03 practice exam questions and answers (Q&A) are available free to help you prepare for and pass the AWS Certified Solutions Architect – Associate SAA-C03 exam and earn the certification.

Question 1031

Exam Question

A company is running a highly sensitive application on Amazon EC2 backed by an Amazon RDS database. Compliance regulations mandate that all personally identifiable information (PII) be encrypted at rest.

Which solution should a solutions architect recommend to meet this requirement with the LEAST amount of changes to the infrastructure?

A. Deploy AWS Certificate Manager to generate certificates. Use the certificates to encrypt the database volume.
B. Deploy AWS CloudHSM, generate encryption keys, and use the customer master key (CMK) to encrypt database volumes.
C. Configure SSL encryption using AWS Key Management Service customer master keys (AWS KMS CMKs) to encrypt database volumes.
D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.

Correct Answer

D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.

Explanation

To meet the requirement of encrypting personally identifiable information (PII) at rest in the existing infrastructure with the least amount of changes, the recommended solution would be:

D. Configure Amazon Elastic Block Store (Amazon EBS) encryption and Amazon RDS encryption with AWS Key Management Service (AWS KMS) keys to encrypt instance and database volumes.

Option D suggests configuring both Amazon EBS encryption and Amazon RDS encryption with AWS KMS keys. This approach allows for encryption of both the EC2 instance volumes and the RDS database volumes without requiring significant changes to the infrastructure.

By enabling Amazon EBS encryption, the data at rest on the EC2 instance volumes will be automatically encrypted. This can be achieved by selecting the encryption option when creating or modifying the EBS volumes associated with the EC2 instance.

Additionally, enabling Amazon RDS encryption ensures that the data at rest in the RDS database is also encrypted. RDS encryption uses AWS KMS to manage the encryption keys. By selecting the appropriate encryption option in the RDS console or API, the RDS database volumes will be encrypted using AWS KMS keys.

By combining these two encryption options, both the EC2 instance volumes and the RDS database volumes can be encrypted without significant modifications to the existing infrastructure.
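As a minimal boto3 sketch of this approach (the Availability Zone, identifiers, and KMS key alias are placeholders; the snapshot-copy path shown is the usual way to encrypt an existing unencrypted RDS instance):

```python
import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# New EBS data volumes can be created encrypted with a KMS key.
# An existing unencrypted volume is encrypted by snapshotting it and
# restoring the snapshot with Encrypted=True.
ec2.create_volume(
    AvailabilityZone="us-east-1a",        # placeholder
    Size=100,
    VolumeType="gp3",
    Encrypted=True,
    KmsKeyId="alias/app-data-key",        # placeholder KMS key alias
)

# RDS encryption at rest is set at creation time. To encrypt an
# existing instance, copy a snapshot with a KMS key and restore it.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="app-db-snap",
    TargetDBSnapshotIdentifier="app-db-snap-encrypted",
    KmsKeyId="alias/app-data-key",
)
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="app-db-encrypted",
    DBSnapshotIdentifier="app-db-snap-encrypted",
)
```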

Options A, B, and C are not the most suitable choices for this scenario:

  • Option A suggests using AWS Certificate Manager to generate certificates and encrypt the database volume. However, AWS Certificate Manager is primarily used for managing SSL/TLS certificates and does not directly provide encryption for data at rest.
  • Option B suggests deploying AWS CloudHSM and using a customer master key (CMK) to encrypt the database volumes. While AWS CloudHSM provides secure key storage and cryptographic operations, it introduces additional complexity and changes to the infrastructure compared to the other options.
  • Option C suggests configuring SSL encryption using AWS Key Management Service customer master keys (AWS KMS CMKs) to encrypt the database volumes. While SSL encryption is important for securing data in transit, it does not provide encryption for data at rest.

Therefore, option D is the recommended solution as it provides encryption for both the EC2 instance volumes and the RDS database volumes with minimal changes to the existing infrastructure.

Question 1032

Exam Question

A company's order fulfillment service uses a MySQL database. The database needs to support a large number of concurrent queries and transactions. Developers are spending time patching and tuning the database, which is causing delays in releasing new product features. The company wants to use cloud-based services to help address this challenge. The solution must allow the developers to migrate the database with little or no code changes and must optimize performance.

Which service should a solutions architect use to meet these requirements?

A. Amazon Aurora
B. Amazon DynamoDB
C. Amazon ElastiCache
D. MySQL on Amazon EC2

Correct Answer

A. Amazon Aurora

Explanation

To address the challenge of supporting a large number of concurrent queries and transactions, optimizing performance, and reducing the time spent on patching and tuning the database, the recommended service would be:

A. Amazon Aurora

Amazon Aurora is a fully managed relational database service compatible with MySQL and PostgreSQL. It is designed to deliver high performance, scalability, and availability while minimizing the need for manual maintenance tasks. With Amazon Aurora, developers can migrate their existing MySQL database with little or no code changes.

Amazon Aurora provides several benefits that align with the requirements mentioned:

  1. High performance: Amazon Aurora is optimized for performance, offering up to five times the throughput of standard MySQL databases. It uses a distributed storage architecture and a purpose-built database engine to achieve high performance and low latency.
  2. Scalability: Amazon Aurora can scale both compute and storage independently to handle increasing workloads. It automatically scales the database storage based on demand and allows read replicas to offload read traffic and improve scalability.
  3. Availability: Amazon Aurora provides built-in high availability through its Multi-AZ deployment option. It automatically replicates data across multiple Availability Zones, ensuring continuous operation in case of infrastructure failures.
  4. Managed service: Amazon Aurora is a fully managed service, which means AWS takes care of patching, backups, and database maintenance tasks. This allows developers to focus on their application development rather than database management.
  5. Compatibility with MySQL: Amazon Aurora is compatible with MySQL, meaning most applications that use MySQL can be migrated to Amazon Aurora with minimal code changes.
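As a rough boto3 sketch of provisioning the target Aurora MySQL cluster (identifiers, instance class, and the master password are placeholders; in practice the password would come from AWS Secrets Manager and the data would be migrated with a snapshot restore or AWS DMS):

```python
import boto3

rds = boto3.client("rds")

# Create the Aurora MySQL cluster (the shared storage layer).
rds.create_db_cluster(
    DBClusterIdentifier="orders-aurora",          # placeholder
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="use-secrets-manager",     # placeholder
)

# Add a writer instance; reader instances are added the same way and
# are reached through the cluster's reader endpoint.
rds.create_db_instance(
    DBInstanceIdentifier="orders-aurora-writer",
    DBClusterIdentifier="orders-aurora",
    DBInstanceClass="db.r6g.large",               # placeholder
    Engine="aurora-mysql",
)
```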

In contrast, the other options are not as suitable for the mentioned requirements:

  • Option B: Amazon DynamoDB is a NoSQL database service and may require significant changes to the existing MySQL-based application and data model.
  • Option C: Amazon ElastiCache is an in-memory caching service and not a relational database. While it can improve read performance, it does not address the requirements for a large number of concurrent queries and transactions.
  • Option D: Running MySQL on Amazon EC2 would still require manual management and maintenance of the database, including patching and tuning, which the company wants to minimize.

Therefore, based on the requirements of supporting concurrent queries and transactions, optimizing performance, and minimizing code changes, Amazon Aurora is the most suitable choice.

Question 1033

Exam Question

An ecommerce company has noticed performance degradation of its Amazon RDS-based web application. The performance degradation is attributed to an increase in the number of read-only SQL queries triggered by business analysts. A solutions architect needs to solve the problem with minimal changes to the existing web application.

What should the solutions architect recommend?

A. Export the data to Amazon DynamoDB and have the business analysts run their queries.
B. Load the data into Amazon ElastiCache and have the business analysts run their queries.
C. Create a read replica of the primary database and have the business analysts run their queries.
D. Copy the data into an Amazon Redshift cluster and have the business analysts run their queries.

Correct Answer

C. Create a read replica of the primary database and have the business analysts run their queries.

Explanation

To address the performance degradation caused by the increase in read-only SQL queries triggered by business analysts, the solutions architect should recommend:

C. Create a read replica of the primary database and have the business analysts run their queries.

Creating a read replica of the primary database is a suitable solution to offload read traffic and improve the performance of the web application without making significant changes to the existing application.

By creating a read replica, the read-only SQL queries can be directed to the replica, reducing the load on the primary database and improving overall performance. The read replica stays in sync with the primary database through asynchronous replication, ensuring that the data is up to date for the business analysts’ queries.

Benefits of using a read replica in this scenario include:

  • Improved performance: Offloading read traffic to the read replica allows the primary database to focus on serving write operations, enhancing the overall performance of the web application.
  • Minimal changes to the application: Creating a read replica does not require significant modifications to the existing web application. The application can continue using the same connection details, and the read replica can be accessed in a similar way as the primary database.
  • Data consistency: The read replica remains synchronized with the primary database, ensuring that the business analysts have access to the most up-to-date data for their queries.
  • Scalability: The read replica can also help handle increased read traffic and scale the application’s read capacity. Multiple read replicas can be created to distribute the load further if needed.
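A minimal boto3 sketch of option C, assuming a primary instance named `orders-db-primary` (all identifiers are placeholders); the analysts then point their tools at the replica's endpoint while the web application is untouched:

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the primary RDS database. Replication is
# asynchronous, so the replica serves slightly delayed but consistent
# data for read-only analyst queries.
replica = rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-analytics-replica",
    SourceDBInstanceIdentifier="orders-db-primary",  # placeholder
    DBInstanceClass="db.r6g.large",  # may differ from the primary
)
print(replica["DBInstance"]["DBInstanceIdentifier"])
```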

In contrast, the other options are not as suitable for the mentioned requirements:

  • Option A: Exporting the data to Amazon DynamoDB would require significant changes to the existing web application’s data model and query patterns.
  • Option B: Loading the data into Amazon ElastiCache, which is an in-memory caching service, may not be suitable for running complex SQL queries.
  • Option D: Copying the data into an Amazon Redshift cluster would require transforming the data into a different data warehouse format and modifying the queries to be compatible with Redshift.

Therefore, creating a read replica of the primary database is the recommended solution as it addresses the performance degradation caused by read-only SQL queries while minimizing changes to the existing web application.

Question 1034

Exam Question

A company is running a multi-tier web application on AWS. The application runs its database tier on Amazon Aurora MySQL. The application and database tiers are in the us-east-1 Region. A database administrator who regularly monitors the Aurora DB cluster finds that an intermittent increase in read traffic is creating high CPU utilization on the read replica and causing increased read latency for the application.

What should a solutions architect do to improve read scalability?

A. Reboot the Aurora DB cluster.
B. Create a cross-Region read replica.
C. Increase the instance class of the read replica.
D. Configure Aurora Auto Scaling for the read replica.

Correct Answer

D. Configure Aurora Auto Scaling for the read replica.

Explanation

To improve read scalability and address the high CPU utilization and increased read latency on the read replica of an Amazon Aurora MySQL DB cluster, a solutions architect should recommend:

D. Configure Aurora Auto Scaling for the read replica.

Configuring Aurora Auto Scaling for the read replicas is the appropriate solution to improve read scalability and handle the intermittent increase in read traffic. Aurora Auto Scaling automatically adjusts the number of Aurora Replicas in the cluster based on the actual workload, allowing the cluster to handle higher read demand during peak periods.

By configuring Aurora Auto Scaling, the read replica can dynamically scale its capacity to accommodate the increased read traffic, ensuring optimal performance and reducing read latency. It automatically adds or removes Aurora Replicas based on the configured scaling policies, such as CPU utilization or connections, to match the demand.

Benefits of using Aurora Auto Scaling for the read replica include:

  • Scalability: Aurora Auto Scaling allows the read replica to automatically scale up or down based on the workload, providing the necessary read capacity during high traffic periods and efficiently utilizing resources during low traffic periods.
  • Performance optimization: By adjusting the capacity to match the workload, Aurora Auto Scaling helps maintain optimal performance by preventing high CPU utilization and reducing read latency.
  • Cost optimization: With auto scaling, resources are allocated based on demand, minimizing costs during periods of lower traffic and eliminating the need for manual scaling.
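Aurora Auto Scaling is configured through Application Auto Scaling. A hedged sketch (the cluster identifier, capacity bounds, and the 70% CPU target are placeholders):

```python
import boto3

aas = boto3.client("application-autoscaling")

CLUSTER = "cluster:orders-aurora"  # placeholder Aurora cluster ID

# Register the Aurora Replica count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=CLUSTER,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=5,
)

# Target tracking: add replicas when the average reader CPU rises
# above the target, remove them when traffic subsides.
aas.put_scaling_policy(
    PolicyName="aurora-reader-cpu-tracking",
    ServiceNamespace="rds",
    ResourceId=CLUSTER,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```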

In contrast, the other options are not suitable for improving read scalability:

  • Option A: Rebooting the Aurora DB cluster is unlikely to address the scalability issue or provide a long-term solution.
  • Option B: Creating a cross-Region read replica would improve read availability and disaster recovery but may not directly address the issue of read scalability and high CPU utilization.
  • Option C: Increasing the instance class of the read replica alone may provide temporary relief but may not be effective in handling intermittent and fluctuating read traffic over the long term.

Therefore, configuring Aurora Auto Scaling for the read replicas is the recommended solution, as it allows the cluster to add read capacity automatically during the intermittent increases in read traffic and improve read scalability while optimizing performance and cost efficiency.

Question 1035

Exam Question

A company has two applications it wants to migrate to AWS. Both applications process a large set of files by accessing the same files at the same time. Both applications need to read the files with low latency.

Which architecture should a solutions architect recommend for this situation?

A. Configure two AWS Lambda functions to run the applications. Create an Amazon EC2 instance with an instance store volume to store the data.
B. Configure two AWS Lambda functions to run the applications. Create an Amazon EC2 instance with an Amazon Elastic Block Store (Amazon EBS) volume to store the data.
C. Configure one memory optimized Amazon EC2 instance to run both applications simultaneously. Create an Amazon Elastic Block Store (Amazon EBS) volume with Provisioned IOPS to store the data.
D. Configure two Amazon EC2 instances to run both applications. Configure Amazon Elastic File System (Amazon EFS) with General Purpose performance mode and Bursting Throughput mode to store the data.

Correct Answer

D. Configure two Amazon EC2 instances to run both applications. Configure Amazon Elastic File System (Amazon EFS) with General Purpose performance mode and Bursting Throughput mode to store the data.

Explanation

For the given scenario where two applications need to process a large set of files simultaneously with low latency, the recommended architecture is:

D. Configure two Amazon EC2 instances to run both applications. Configure Amazon Elastic File System (Amazon EFS) with General Purpose performance mode and Bursting Throughput mode to store the data.

Amazon Elastic File System (Amazon EFS) is a fully managed, highly available, and scalable file storage service provided by AWS. It is designed to provide shared file storage for multiple Amazon EC2 instances. In this scenario, using Amazon EFS is the most suitable option to achieve low-latency file access and enable both applications to access the same files simultaneously.

Key benefits of using Amazon EFS in this scenario include:

  • Shared file system: Amazon EFS allows multiple Amazon EC2 instances to access the same files concurrently. This enables both applications to process the large set of files at the same time without conflicts or data duplication.
  • Low latency: Amazon EFS provides low-latency file access, allowing the applications to read the files with minimal delay. This is particularly important for processing large sets of files efficiently.
  • Scalability: Amazon EFS is highly scalable and can automatically grow or shrink its capacity as the file system grows or shrinks. This ensures that the applications can handle increasing data volumes without performance degradation.
  • High availability: Amazon EFS is designed to provide high availability, with data automatically replicated across multiple Availability Zones. This ensures the files are accessible even in the event of a failure in one Availability Zone.
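A minimal boto3 sketch of creating the file system from option D (the creation token and tag are placeholders; General Purpose and Bursting also happen to be the defaults for new file systems):

```python
import boto3

efs = boto3.client("efs")

fs = efs.create_file_system(
    CreationToken="shared-files-token",   # placeholder idempotency token
    PerformanceMode="generalPurpose",
    ThroughputMode="bursting",
    Tags=[{"Key": "Name", "Value": "shared-files"}],
)
print(fs["FileSystemId"])

# After creating a mount target in each instance's subnet
# (efs.create_mount_target), each EC2 instance mounts the same file
# system, e.g. with amazon-efs-utils installed:
#   sudo mount -t efs fs-XXXXXXXX:/ /mnt/shared
```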

On the other hand, the other options are not as suitable for this scenario:

  • Option A: An instance store volume is ephemeral and local to a single EC2 instance, and AWS Lambda functions cannot mount it, so the two applications could not share the files or rely on them persisting.
  • Option B: An Amazon EBS volume attaches to a single instance (Multi-Attach is limited to specific volume types and use cases), and Lambda functions cannot mount EBS volumes either, so simultaneous shared access is not possible.
  • Option C: Running both applications on a single EC2 instance with an EBS volume introduces resource contention and a single point of failure when both applications access the files simultaneously.

Therefore, the recommended architecture is to configure two Amazon EC2 instances to run both applications and use Amazon EFS to store the files. This ensures low-latency file access and allows both applications to process the large set of files concurrently.

Question 1036

Exam Question

A company has a custom application with embedded credentials that retrieves information from an Amazon RDS MySQL DB instance. Management says the application must be made more secure with the least amount of programming effort.

What should a solutions architect do to meet these requirements?

A. Use AWS Key Management Service (AWS KMS) customer master keys (CMKs) to create keys. Configure the application to load the database credentials from AWS KMS. Enable automatic key rotation.
B. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Create an AWS Lambda function that rotates the credentials in Secrets Manager.
C. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Secrets Manager.
D. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Systems Manager Parameter Store. Configure the application to load the database credentials from Parameter Store. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Parameter Store.

Correct Answer

C. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Secrets Manager.

Explanation

To make the custom application more secure with the least amount of programming effort, a solutions architect should:

C. Create credentials on the RDS for MySQL database for the application user and store the credentials in AWS Secrets Manager. Configure the application to load the database credentials from Secrets Manager. Set up a credentials rotation schedule for the application user in the RDS for MySQL database using Secrets Manager.

AWS Secrets Manager is a service provided by AWS that helps protect sensitive information such as database credentials, API keys, and other secrets. It provides a secure and centralized way to store and manage secrets, with built-in integration and rotation capabilities.

In this scenario, storing the database credentials in AWS Secrets Manager and configuring the application to load the credentials from Secrets Manager offers several benefits:

  • Enhanced security: By storing the credentials in Secrets Manager, they are encrypted and protected using AWS Key Management Service (AWS KMS). This helps ensure that the credentials are securely stored and transmitted.
  • Least amount of programming effort: Instead of embedding the credentials directly in the application, the application can be configured to retrieve the credentials dynamically from Secrets Manager. This eliminates the need to hardcode the credentials within the application code, reducing the risk of exposure and making it easier to manage and update the credentials.
  • Credential rotation: AWS Secrets Manager provides built-in, scheduled rotation for Amazon RDS databases. When rotation is enabled for an RDS for MySQL secret, Secrets Manager provisions and manages the rotation function itself, so credentials are changed regularly without writing or maintaining custom code. This reduces the risk of unauthorized access from long-lived or leaked credentials.
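As a minimal sketch of the application side (the secret name `prod/app/mysql` and the `pymysql` driver are illustrative assumptions; the secret uses the standard key names Secrets Manager stores for RDS databases):

```python
import json

import boto3
import pymysql  # illustrative MySQL driver; any driver works

def get_db_connection(secret_id: str = "prod/app/mysql"):
    # Fetch current credentials at connection time, so rotated
    # passwords are picked up with no application changes.
    sm = boto3.client("secretsmanager")
    secret = json.loads(
        sm.get_secret_value(SecretId=secret_id)["SecretString"]
    )
    return pymysql.connect(
        host=secret["host"],
        port=int(secret.get("port", 3306)),
        user=secret["username"],
        password=secret["password"],
        database=secret.get("dbname", ""),
    )
```

The rotation schedule itself is enabled on the secret in Secrets Manager; for RDS for MySQL secrets the service manages the rotation function, so no custom code is needed.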

The other options mentioned are not the most suitable choices for achieving the desired outcome:

  • Option A: AWS KMS encrypts and manages keys, but it is not a credential store; it does not provide the retrieval and rotation workflow that Secrets Manager offers for database credentials.
  • Option B: Storing the credentials in Secrets Manager is correct, but writing and maintaining a custom AWS Lambda rotation function is more programming effort than the rotation schedule Secrets Manager provides out of the box for RDS databases.
  • Option D: AWS Systems Manager Parameter Store can hold the credentials, but it lacks built-in rotation, so rotation would still require custom code.

Therefore, the recommended approach is to store the database credentials in AWS Secrets Manager, configure the application to retrieve them from Secrets Manager, and enable Secrets Manager's built-in rotation schedule for the RDS for MySQL application user, which requires the least programming effort.

Question 1037

Exam Question

A company’s application hosted on Amazon EC2 instances needs to access an Amazon S3 bucket. Due to data sensitivity, traffic cannot traverse the internet.

How should a solutions architect configure access?

A. Create a private hosted zone using Amazon Route 53.
B. Configure a VPC gateway endpoint for Amazon S3 in the VPC.
C. Configure AWS PrivateLink between the EC2 instance and the S3 bucket.
D. Set up a site-to-site VPN connection between the VPC and the S3 bucket.

Correct Answer

B. Configure a VPC gateway endpoint for Amazon S3 in the VPC.

Explanation

To allow the Amazon EC2 instances to access an Amazon S3 bucket without traffic traversing the internet, a solutions architect should:

B. Configure a VPC gateway endpoint for Amazon S3 in the VPC.

A VPC (Virtual Private Cloud) gateway endpoint for Amazon S3 allows private connectivity between a VPC and S3. It enables traffic from EC2 instances in the VPC to access S3 using private IP addresses, without the need to traverse the internet.

By configuring a VPC gateway endpoint for Amazon S3, the traffic between the EC2 instances and the S3 bucket remains within the AWS network and does not go over the internet. This ensures data sensitivity and enhances security.
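A minimal boto3 sketch (the VPC and route table IDs are placeholders; the service name follows the `com.amazonaws.<region>.s3` pattern):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A gateway endpoint adds routes for S3's prefix list to the selected
# route tables, keeping S3 traffic on the AWS network.
endpoint = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",             # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],   # placeholder
)
print(endpoint["VpcEndpoint"]["VpcEndpointId"])
```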

The other options mentioned are not the most suitable choices for achieving the desired outcome:

  • Option A: Creating a private hosted zone using Amazon Route 53 is used for configuring private DNS resolution within a VPC and does not directly address the requirement of accessing S3 without internet traffic.
  • Option C: AWS PrivateLink interface endpoints can also provide private connectivity to Amazon S3, but they incur hourly and per-GB data processing charges and require elastic network interfaces in the VPC. For this requirement, the free, route-table-based gateway endpoint is the simpler fit.
  • Option D: Setting up a site-to-site VPN connection between the VPC and the S3 bucket is not a valid configuration. VPN connections are used for establishing secure connections between on-premises networks and AWS, rather than for connecting VPC resources with AWS services like S3.

Therefore, the recommended approach is to configure a VPC gateway endpoint for Amazon S3 in the VPC to allow the EC2 instances to access the S3 bucket without internet traffic.

Question 1038

Exam Question

A company is preparing to launch a public-facing web application in the AWS Cloud. The architecture consists of Amazon EC2 instances within a VPC behind an Elastic Load Balancer (ELB). A third-party service is used for DNS. The company's solutions architect must recommend a solution to detect and protect against large-scale DDoS attacks.

Which solution meets these requirements?

A. Enable Amazon GuardDuty on the account.
B. Enable Amazon Inspector on the EC2 instances.
C. Enable AWS Shield and assign Amazon Route 53 to it.
D. Enable AWS Shield Advanced and assign the ELB to it.

Correct Answer

D. Enable AWS Shield Advanced and assign the ELB to it.

Explanation

To detect and protect against large-scale DDoS attacks in the given scenario, the recommended solution is:

D. Enable AWS Shield Advanced and assign the ELB to it.

AWS Shield is a managed Distributed Denial of Service (DDoS) protection service provided by AWS. It offers two tiers of protection: AWS Shield Standard and AWS Shield Advanced.

  • AWS Shield Standard: This service is automatically included at no additional cost with all AWS accounts. It provides basic DDoS protection for AWS resources, including Amazon EC2 instances behind Elastic Load Balancers (ELBs). It helps protect against common and most frequently observed DDoS attacks.
  • AWS Shield Advanced: This is a higher level of protection that offers more advanced DDoS mitigation capabilities. It includes additional features like enhanced DDoS protection, real-time attack visibility and reporting, and 24/7 DDoS response team (DRT) support.

In the given scenario, where the architecture consists of EC2 instances within a VPC behind an ELB, enabling AWS Shield Advanced and assigning the ELB to it is the most suitable solution to detect and protect against large-scale DDoS attacks. AWS Shield Advanced provides comprehensive protection and mitigation techniques, including real-time monitoring, automatic attack detection, and response capabilities.
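As a hedged sketch, once the account has an active Shield Advanced subscription, the load balancer is registered as a protected resource (the ARN is a placeholder):

```python
import boto3

shield = boto3.client("shield")

# Shield Advanced must already be subscribed on the account
# (shield.create_subscription() does this once per account).
shield.create_protection(
    Name="web-alb-ddos-protection",
    ResourceArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/web-alb/50dc6c495c0c9188"  # placeholder ARN
    ),
)
```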

The other options mentioned are not the most appropriate choices for achieving the desired outcome:

  • Option A: Amazon GuardDuty is a threat detection service that focuses on identifying unauthorized and malicious activity within an AWS account. While it is an important security service, it is not designed to detect and mitigate large-scale DDoS attacks.
  • Option B: Amazon Inspector is a vulnerability assessment service that helps identify security issues and vulnerabilities in EC2 instances. While it is useful for improving the application's security posture, it does not provide DDoS attack detection or protection.
  • Option C: While Amazon Route 53 is a scalable and highly available DNS service, it does not provide direct DDoS protection. AWS Shield is the recommended service for DDoS protection, and AWS Shield Advanced is the appropriate tier for comprehensive protection against large-scale DDoS attacks.

Therefore, the recommended solution in this scenario is to enable AWS Shield Advanced and assign the ELB to it to detect and protect against large-scale DDoS attacks.

Question 1039

Exam Question

A company hosts an application on an Amazon EC2 instance that requires a maximum of 200 GB storage space. The application is used infrequently, with peaks during mornings and evenings. Disk I/O varies, but peaks at 3,000 IOPS. The chief financial officer of the company is concerned about costs and has asked a solutions architect to recommend the most cost-effective storage option that does not sacrifice performance.

Which solution should the solutions architect recommend?

A. Amazon EBS Cold HDD (sc1)
B. Amazon EBS General Purpose SSD (gp2)
C. Amazon EBS Provisioned IOPS SSD (io1)
D. Amazon EBS Throughput Optimized HDD (st1)

Correct Answer

B. Amazon EBS General Purpose SSD (gp2)

Explanation

Based on the given requirements, the most cost-effective storage option that does not sacrifice performance would be:

B. Amazon EBS General Purpose SSD (gp2)

  • Amazon EBS Cold HDD (sc1): This option has the lowest cost per GB among EBS volume types, but it is a throughput-oriented HDD whose IOPS ceiling is far below 3,000, so it cannot meet the peak I/O requirement.
  • Amazon EBS General Purpose SSD (gp2): This option balances price and performance. A 200 GB gp2 volume has a baseline of 600 IOPS (3 IOPS per GiB) and can burst to 3,000 IOPS. Because the application is used only intermittently, burst credits replenish between the morning and evening peaks, so gp2 covers the 3,000 IOPS peak at the lowest cost.
  • Amazon EBS Provisioned IOPS SSD (io1): This option delivers consistent, provisioned performance, but you pay for the storage and for every provisioned IOPS whether or not it is used. Provisioning 3,000 IOPS for a workload that peaks only occasionally would cost several times more than gp2.
  • Amazon EBS Throughput Optimized HDD (st1): This option is designed for large, sequential, throughput-heavy workloads (roughly 500 IOPS maximum), so it cannot deliver the required 3,000 IOPS.

Considering the 3,000 IOPS peak, the intermittent usage pattern, and the CFO's cost concerns, Amazon EBS General Purpose SSD (gp2) is the most suitable option: its burst capability covers the peak without paying for provisioned IOPS that would sit idle most of the day.
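A back-of-the-envelope comparison makes the cost gap concrete. The prices below are illustrative us-east-1 figures and vary by region, so treat them as assumptions, not quotes:

```python
SIZE_GIB = 200
PEAK_IOPS = 3_000

# gp2: 3 IOPS per GiB baseline (minimum 100), bursting to 3,000 IOPS
# for volumes under 1 TiB; you pay for storage only.
gp2_baseline_iops = max(100, 3 * SIZE_GIB)            # 600 IOPS
gp2_monthly = SIZE_GIB * 0.10                         # ~$20/month

# io1: you pay for storage AND every provisioned IOPS, every month.
io1_monthly = SIZE_GIB * 0.125 + PEAK_IOPS * 0.065    # ~$220/month

print(f"gp2: {gp2_baseline_iops} IOPS baseline, 3,000 IOPS burst, "
      f"~${gp2_monthly:.0f}/month")
print(f"io1: 3,000 IOPS provisioned, ~${io1_monthly:.0f}/month")
```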

Question 1040

Exam Question

A company has no existing file share services. A new project requires access to file storage that is mountable as a drive for on-premises desktops. The file server must authenticate users to an Active Directory domain before they are able to access the storage.

Which service will allow Active Directory users to mount storage as a drive on their desktops?

A. Amazon S3 Glacier
B. AWS DataSync
C. AWS Snowball Edge
D. AWS Storage Gateway

Correct Answer

D. AWS Storage Gateway

Explanation

The service that allows Active Directory users to mount storage as a drive on their desktops is:

D. AWS Storage Gateway

AWS Storage Gateway provides a hybrid cloud storage solution that enables on-premises applications to seamlessly and securely access cloud storage. The File Gateway configuration of AWS Storage Gateway supports file-based access to Amazon S3 objects, allowing you to mount Amazon S3 buckets as a file share on your on-premises desktops.

With AWS Storage Gateway File Gateway, you can integrate your existing Active Directory environment for user authentication. This enables users to authenticate to the Active Directory domain before accessing the file share, ensuring secure access to the storage.
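As a hedged sketch (all ARNs and tokens are placeholders, and the gateway must already be deployed and joined to the Active Directory domain), creating the AD-authenticated SMB share looks roughly like this:

```python
import boto3

sgw = boto3.client("storagegateway")

# SMB file share on an existing File Gateway, backed by an S3 bucket,
# with Active Directory authentication required for access.
share = sgw.create_smb_file_share(
    ClientToken="project-share-token",  # placeholder idempotency token
    GatewayARN=("arn:aws:storagegateway:us-east-1:123456789012:"
                "gateway/sgw-12A3456B"),
    Role="arn:aws:iam::123456789012:role/StorageGatewayS3Access",
    LocationARN="arn:aws:s3:::project-file-share",
    Authentication="ActiveDirectory",
)
print(share["FileShareARN"])
```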

Amazon S3 Glacier is an archival storage service that is not designed for direct file access or mounting as a drive.

AWS DataSync is a service for securely transferring large amounts of data between on-premises storage and AWS services, but it does not provide direct file access or mounting as a drive.

AWS Snowball Edge is a physical data transfer device that combines storage and compute capabilities. While it can be used for offline data transfer, it does not directly provide file access or mounting as a drive for on-premises desktops.

Therefore, the correct choice for this scenario is AWS Storage Gateway (File Gateway configuration).
