AWS Certified DevOps Engineer - Professional - QnA - Part 2

An ecommerce company has chosen AWS to host its new platform.
The company's DevOps team has started building an AWS Control Tower landing zone.
The DevOps team has set the identity store within AWS Single Sign-On (AWS SSO) to an external identity provider (IdP) and has configured SAML 2.0.
The DevOps team wants a robust permission model that applies the principle of least privilege.
The model must allow the team to build and manage only the team's own resources.
Which combination of steps will meet these requirements? (Choose three.)

  1. πŸš€ Option A: Create IAM policies that include the required permissions. Include the aws:PrincipalTag condition key.
  2. πŸš€ Option B: Create permission sets. Attach an inline policy that includes the required permissions and uses the aws:PrincipalTag condition key to scope the permissions.
  3. πŸš€ Option C: Create a group in the IdP. Place users in the group. Assign the group to accounts and the permission sets in AWS SSO.
  4. πŸš€ Option D: Create a group in the IdP. Place users in the group. Assign the group to OUs and IAM policies.
  5. πŸš€ Option E: Enable attributes for access control in AWS SSO. Apply tags to users. Map the tags as key-value pairs.
  6. πŸš€ Option F: Enable attributes for access control in AWS SSO. Map attributes from the IdP as key-value pairs.

Q1

The Correct Answers are:

  • βœ… Option B. Create permission sets. Attach an inline policy that includes the required permissions and uses the aws:PrincipalTag condition key to scope the permissions.

  • βœ… Option C. Create a group in the IdP. Place users in the group. Assign the group to accounts and the permission sets in AWS SSO.

  • βœ… Option F. Enable attributes for access control in AWS SSO. Map attributes from the IdP as key-value pairs.

  • βœ… Here's why:

  1. βœ… Scoped Permissions: Permission sets with inline policies and the aws:PrincipalTag condition key enable specific access control, ensuring users can manage only their resources. This aligns with the least privilege principle by limiting access to what is necessary for the tasks.

  2. βœ… Group-based Access Control: Creating a group in the IdP and assigning it to accounts and permission sets in AWS SSO simplifies user management and access provisioning. This method leverages existing organizational structures and streamlines the process of granting access.

  3. βœ… Attribute-based Access Control (ABAC): Enabling attributes for access control and mapping IdP attributes to AWS SSO allows for dynamic and flexible permission assignments based on user attributes. This approach tailors access rights more closely to the user's role and function within the organization.
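
The ABAC pattern from Options B and F can be sketched as an inline policy for a permission set. This is an illustrative example, not the exam's exact policy: the `team` tag key and the EC2 actions are assumptions, and the `aws:PrincipalTag` value would come from the IdP attribute mapping.

```python
import json

# Hypothetical inline policy for an AWS SSO permission set: a user may act only
# on EC2 instances whose "team" resource tag matches the "team" principal tag
# mapped from the IdP via attributes for access control.
inline_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TeamScopedEc2Access",
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:TerminateInstances",
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/team": "${aws:PrincipalTag/team}"
                }
            },
        }
    ],
}

print(json.dumps(inline_policy, indent=2))
```

Because the tag comparison is evaluated at request time, one permission set serves every team; the IdP attribute, not the policy text, decides whose resources each user can touch.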

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: While IAM policies are fundamental to AWS security, this option does not integrate directly with AWS SSO's model for managing permissions across a distributed environment, nor does it utilize the SSO features for external IdP integration.

  • ❌ Option D: Assigning groups to OUs and IAM policies is not directly supported by AWS SSO's permission sets framework, which is designed to simplify access management across AWS accounts and services.

  • ❌ Option E: Applying tags to users within AWS SSO and mapping them as key-value pairs is not mentioned as a supported feature for access control in AWS SSO documentation. The focus is on mapping attributes from the IdP for access control.

A1

An ecommerce company is receiving reports that its order history page is experiencing delays in reflecting the processing status of orders.
The order processing system consists of an AWS Lambda function using reserved concurrency.
The Lambda function processes order messages from an Amazon SQS queue and inserts processed orders into an Amazon DynamoDB table.
The DynamoDB table has Auto Scaling enabled for read and write capacity.
Which actions will diagnose and resolve the delay? (Choose two.)

  1. πŸš€ Option A: Check the ApproximateAgeOfOldestMessage metric for the SQS queue and increase the Lambda function concurrency limit.
  2. πŸš€ Option B: Check the ApproximateAgeOfOldestMessage metric for the SQS queue and configure a redrive policy on the SQS queue.
  3. πŸš€ Option C: Check the NumberOfMessagesSent metric for the SQS queue and increase the SQS queue visibility timeout.
  4. πŸš€ Option D: Check the ThrottledWriteRequests metric for the DynamoDB table and increase the maximum write capacity units for the table's Auto Scaling policy.
  5. πŸš€ Option E: Check the Throttles metric for the Lambda function and increase the Lambda function timeout.

Q2

The Correct Answers are:

  • βœ… Option A: Check the ApproximateAgeOfOldestMessage metric for the SQS queue and increase the Lambda function concurrency limit.

  • βœ… Option D: Check the ThrottledWriteRequests metric for the DynamoDB table and increase the maximum write capacity units for the table's Auto Scaling policy.

  • βœ… Here's why:

  1. βœ… Addressing SQS Queue Delays: The ApproximateAgeOfOldestMessage metric indicates how long messages have been waiting in the queue. A high age suggests that the Lambda function may not be processing messages quickly enough. Increasing the Lambda function's concurrency limit allows more instances of the function to run simultaneously, processing messages faster and reducing delays.

  2. βœ… Mitigating DynamoDB Throttling: ThrottledWriteRequests on DynamoDB indicate that write operations exceed the provisioned write capacity, causing delays in data insertion. Increasing the maximum write capacity units for the Auto Scaling policy helps to accommodate higher write loads, reducing write latencies and improving the overall performance of the order processing system.
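
The diagnosis in points 1 and 2 can be sketched as a small triage helper. The thresholds here are illustrative, not AWS recommendations; in practice both metrics would be read from CloudWatch.

```python
def triage(approx_age_of_oldest_message_s: float,
           throttled_write_requests: int) -> list[str]:
    """Map the two CloudWatch metrics from the answer to remediation actions.

    Thresholds are illustrative placeholders, not AWS guidance.
    """
    actions = []
    if approx_age_of_oldest_message_s > 60:
        # Messages are aging in the queue: Lambda is not keeping up.
        actions.append("raise Lambda reserved concurrency")
    if throttled_write_requests > 0:
        # DynamoDB is rejecting writes: provisioned capacity ceiling is too low.
        actions.append("raise Auto Scaling max write capacity units")
    return actions

print(triage(300, 12))
```

With an oldest-message age of 300 seconds and 12 throttled writes, both remediations apply, matching Options A and D.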

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option B: Configuring a redrive policy for the SQS queue addresses the issue of repeatedly failing messages but does not directly address the processing delay of orders. It's more about message handling and error management rather than improving processing speed.

  • ❌ Option C: Increasing the SQS queue visibility timeout would not directly address the delay issue unless the delay is caused by messages being processed multiple times due to short visibility timeouts. This option does not tackle the root causes identified in options A and D, which are processing capability and database write throughput.

  • ❌ Option E: Increasing the Lambda function timeout may help if the function is timing out before processing is complete, but it does not address issues related to processing capacity or database write throughput. Delays due to insufficient concurrency or database throttling are better addressed by options A and D.

A2

A company has a single AWS account that runs hundreds of Amazon EC2 instances in a single AWS Region.
New EC2 instances are launched and terminated each hour in the account.
The account also includes existing EC2 instances that have been running for longer than a week.
The company's security policy requires all running EC2 instances to use an EC2 instance profile.
If an EC2 instance does not have an instance profile attached, the EC2 instance must use a default instance profile that has no IAM permissions assigned.
A DevOps engineer reviews the account and discovers EC2 instances that are running without an instance profile.
During the review, the DevOps engineer also observes that new EC2 instances are being launched without an instance profile.
Which solution will ensure that an instance profile is attached to all existing and future EC2 instances in the Region?

  1. πŸš€ Option A: Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that reacts to EC2 RunInstances API calls. Configure the rule to invoke an AWS Lambda function to attach the default instance profile to the EC2 instances.
  2. πŸš€ Option B: Configure the ec2-instance-profile-attached AWS Config managed rule with a trigger type of configuration changes. Configure an automatic remediation action that invokes an AWS Systems Manager Automation runbook to attach the default instance profile to the EC2 instances.
  3. πŸš€ Option C: Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that reacts to EC2 StartInstances API calls. Configure the rule to invoke an AWS Systems Manager Automation runbook to attach the default instance profile to the EC2 instances.
  4. πŸš€ Option D: Configure the iam-role-managed-policy-check AWS Config managed rule with a trigger type of configuration changes. Configure an automatic remediation action that invokes an AWS Lambda function to attach the default instance profile to the EC2 instances.

Q3

The Correct Answer is:

  • βœ… Option B: Configure the ec2-instance-profile-attached AWS Config managed rule with a trigger type of configuration changes. Configure an automatic remediation action that invokes an AWS Systems Manager Automation runbook to attach the default instance profile to the EC2 instances.

  • βœ… Here's why:

  1. βœ… Comprehensive Coverage: AWS Config continuously monitors and records AWS resource configurations, allowing it to detect all EC2 instances without an instance profile, regardless of when they were launched. This ensures both existing and new instances are covered.

  2. βœ… Automatic Remediation: The automatic remediation action via AWS Systems Manager Automation allows for the attachment of the default instance profile to non-compliant EC2 instances without manual intervention, ensuring compliance with the company's security policy.
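
A CloudFormation-style sketch of Option B, written here as a Python dict for illustration. The rule's source identifier follows the AWS managed rule `ec2-instance-profile-attached`; the runbook name in `TargetId` is an assumption and should be checked against the current Systems Manager Automation runbook catalog.

```python
# Config rule plus automatic remediation, expressed as a CloudFormation-style
# resource map. The SSM runbook name below is an assumed placeholder.
template_fragment = {
    "InstanceProfileRule": {
        "Type": "AWS::Config::ConfigRule",
        "Properties": {
            "ConfigRuleName": "ec2-instance-profile-attached",
            "Source": {
                "Owner": "AWS",
                "SourceIdentifier": "EC2_INSTANCE_PROFILE_ATTACHED",
            },
        },
    },
    "AttachProfileRemediation": {
        "Type": "AWS::Config::RemediationConfiguration",
        "Properties": {
            "ConfigRuleName": "ec2-instance-profile-attached",
            "Automatic": True,  # remediate without manual approval
            "TargetType": "SSM_DOCUMENT",
            # Assumed runbook name; verify against the SSM document library.
            "TargetId": "AWSConfigRemediation-AttachIAMInstanceProfile",
        },
    },
}
```

Because AWS Config evaluates the recorded configuration of every instance, this covers the week-old instances as well as the ones launched each hour.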

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: This solution only addresses instances at the time of their launch and does not cover existing instances already running without an instance profile.

  • ❌ Option C: Similar to Option A, this solution is reactive to the start events of EC2 instances and may not effectively ensure that all existing instances comply with the security policy.

  • ❌ Option D: While this option involves checking compliance, the iam-role-managed-policy-check rule is focused on IAM roles and managed policies, rather than directly ensuring that EC2 instances have an instance profile attached.

A3

A DevOps engineer is building a continuous deployment pipeline for a serverless application that uses AWS Lambda functions.
The company wants to reduce the customer impact of an unsuccessful deployment.
The company also wants to monitor for issues.
Which deploy stage configuration will meet these requirements?

  1. πŸš€ Option A: Use an AWS Serverless Application Model (AWS SAM) template to define the serverless application. Use AWS CodeDeploy to deploy the Lambda functions with the Canary10Percent15Minutes Deployment Preference Type. Use Amazon CloudWatch alarms to monitor the health of the functions.
  2. πŸš€ Option B: Use AWS CloudFormation to publish a new stack update, and include Amazon CloudWatch alarms on all resources. Set up an AWS CodePipeline approval action for a developer to verify and approve the AWS CloudFormation change set.
  3. πŸš€ Option C: Use AWS CloudFormation to publish a new version on every stack update, and include Amazon CloudWatch alarms on all resources. Use the RoutingConfig property of the AWS::Lambda::Alias resource to update the traffic routing during the stack update.
  4. πŸš€ Option D: Use AWS CodeBuild to add sample event payloads for testing to the Lambda functions. Publish a new version of the functions, and include Amazon CloudWatch alarms. Update the production alias to point to the new version. Configure rollbacks to occur when an alarm is in the ALARM state.

Q4

The Correct Answer is:

  • βœ… Option A: Use an AWS Serverless Application Model (AWS SAM) template to define the serverless application. Use AWS CodeDeploy to deploy the Lambda functions with the Canary10Percent15Minutes Deployment Preference Type. Use Amazon CloudWatch alarms to monitor the health of the functions.

  • βœ… Here's why:

  1. βœ… Reduced Customer Impact: The Canary10Percent15Minutes deployment preference shifts 10% of traffic to the new version of the Lambda function, waits 15 minutes, and then shifts the remaining 90% only if no alarms have fired. This approach minimizes the impact on customers by limiting exposure to potential issues in the new deployment.

  2. βœ… Monitoring and Issue Detection: Amazon CloudWatch alarms monitor the health of the functions during and after deployment. If any issues are detected, such as increased error rates or response times, the alarms can trigger automatic rollbacks, further reducing customer impact.
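
The SAM configuration from Option A can be sketched as follows. In practice this lives in the YAML template; it is shown here as a Python dict for illustration, and the handler, runtime, alias, and alarm names are assumptions.

```python
# SAM function resource with a canary deployment preference. AutoPublishAlias
# is required so CodeDeploy has an alias whose traffic weights it can shift.
sam_function = {
    "Type": "AWS::Serverless::Function",
    "Properties": {
        "Handler": "app.handler",          # hypothetical handler
        "Runtime": "python3.12",
        "AutoPublishAlias": "live",
        "DeploymentPreference": {
            "Type": "Canary10Percent15Minutes",
            # Hypothetical CloudWatch alarm; if it enters ALARM during the
            # 15-minute bake, CodeDeploy rolls traffic back automatically.
            "Alarms": ["OrdersErrorAlarm"],
        },
    },
}
```

Listing alarms under `DeploymentPreference` is what ties the monitoring requirement to the rollback behavior: CodeDeploy watches them during the canary window and reverts the alias on failure.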

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option B: While including CloudWatch alarms provides monitoring, this option lacks the gradual traffic shifting mechanism provided by AWS CodeDeploy's deployment preferences, potentially exposing all users to issues immediately after deployment.

  • ❌ Option C: Publishing a new version and updating traffic routing via the AWS::Lambda::Alias resource does allow for traffic management, but it lacks the controlled, automated rollout and rollback mechanisms that the Canary deployment strategy provides.

  • ❌ Option D: Adding sample event payloads and publishing new versions for testing are good practices, but updating the production alias immediately to the new version without a gradual traffic shifting strategy could expose all users to potential issues at once, contrary to the requirement to reduce customer impact.

A4

To run an application, a DevOps engineer launches Amazon EC2 instances with public IP addresses in a public subnet.
A user data script obtains the application artifacts and installs them on the instances upon launch.
A change to the security classification of the application now requires the instances to run with no access to the Internet.
While the instances launch successfully and show as healthy, the application does not seem to be installed.
Which of the following should successfully install the application while complying with the new rule?

  1. πŸš€ Option A: Launch the instances in a public subnet with Elastic IP addresses attached. Once the application is installed and running, run a script to disassociate the Elastic IP addresses afterwards.
  2. πŸš€ Option B: Set up a NAT gateway. Deploy the EC2 instances to a private subnet. Update the private subnet's route table to use the NAT gateway as the default route.
  3. πŸš€ Option C: Publish the application artifacts to an Amazon S3 bucket and create a VPC endpoint for S3. Assign an IAM instance profile to the EC2 instances so they can read the application artifacts from the S3 bucket.
  4. πŸš€ Option D: Create a security group for the application instances and whitelist only outbound traffic to the artifact repository. Remove the security group rule once the install is complete.

Q5

The Correct Answer is:

  • βœ… Option C: Publish the application artifacts to an Amazon S3 bucket and create a VPC endpoint for S3. Assign an IAM instance profile to the EC2 instances so they can read the application artifacts from the S3 bucket.

  • βœ… Here's why:

  1. βœ… Compliance with No Internet Access: Creating a VPC endpoint for S3 enables EC2 instances in a private subnet to access the S3 bucket without requiring internet access. This complies with the security classification requiring no internet access for the instances.

  2. βœ… Secure Artifact Retrieval: Assigning an IAM instance profile to the EC2 instances provides the necessary permissions to securely access and download the application artifacts from the S3 bucket without exposing the instances to the public internet.
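
The gateway endpoint from Option C can be sketched as a CloudFormation-style resource, shown here as a Python dict for illustration. The VPC ID, route table ID, and Region are placeholders.

```python
# Gateway VPC endpoint for S3: traffic to the S3 service name is routed
# through the endpoint via the private subnet's route table, so the
# instances never need an internet path.
s3_endpoint = {
    "Type": "AWS::EC2::VPCEndpoint",
    "Properties": {
        "VpcId": "vpc-0123456789abcdef0",            # placeholder
        "ServiceName": "com.amazonaws.us-east-1.s3", # Region is a placeholder
        "VpcEndpointType": "Gateway",
        "RouteTableIds": ["rtb-0123456789abcdef0"],  # private subnet's table
    },
}
```

With the endpoint in place, the user data script's artifact download resolves to the S3 gateway route, and the IAM instance profile supplies the `s3:GetObject` permission.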

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: Launching instances in a public subnet with Elastic IP addresses attached conflicts with the requirement for no internet access. Disassociating the Elastic IPs afterwards does not address the initial installation issue without internet access.

  • ❌ Option B: Setting up a NAT gateway for instances in a private subnet provides internet access indirectly, which may violate the strict no internet access requirement of the application's security classification.

  • ❌ Option D: Creating a security group to whitelist outbound traffic only during installation partially addresses the security requirement. However, it still requires temporary internet access, which could be against the security policies.

A5

A development team is using AWS CodeCommit to version control application code and AWS CodePipeline to orchestrate software deployments.
The team has decided to use a remote master branch as the trigger for the pipeline to integrate code changes.
A developer has pushed code changes to the CodeCommit repository, but noticed that the pipeline did not start, even after 10 minutes.
Which of the following actions should be taken to troubleshoot this issue?

  1. πŸš€ Option A: Check that an Amazon CloudWatch Events rule has been created for the master branch to trigger the pipeline.
  2. πŸš€ Option B: Check that the CodePipeline service role has permission to access the CodeCommit repository.
  3. πŸš€ Option C: Check that the developer's IAM role has permission to push to the CodeCommit repository.
  4. πŸš€ Option D: Check to see if the pipeline failed to start because of CodeCommit errors in Amazon CloudWatch Logs.

Q6

The Correct Answer is:

  • βœ… Option A: Check that an Amazon CloudWatch Events rule has been created for the master branch to trigger the pipeline.

  • βœ… Here's why:

  1. βœ… Trigger Configuration: For AWS CodePipeline to automatically start a new pipeline execution upon code changes in the CodeCommit repository, there must be a trigger in place. Amazon CloudWatch Events rules are used to detect changes in the repository and trigger the pipeline. If such a rule is not set up or not correctly configured for the master branch, the pipeline won't start automatically upon a push to the branch.
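
The event pattern such a rule needs can be sketched as follows; the account number and repository name in the ARN are placeholders.

```python
# EventBridge (CloudWatch Events) pattern for a CodeCommit push to the master
# branch, as used by a CodePipeline source trigger. Repository ARN is a
# placeholder.
event_pattern = {
    "source": ["aws.codecommit"],
    "detail-type": ["CodeCommit Repository State Change"],
    "resources": ["arn:aws:codecommit:us-east-1:111122223333:app-repo"],
    "detail": {
        "event": ["referenceCreated", "referenceUpdated"],
        "referenceType": ["branch"],
        "referenceName": ["master"],  # the branch the pipeline watches
    },
}
```

If this rule is missing, or `referenceName` names a different branch, pushes to master succeed but nothing starts the pipeline, which matches the symptom described.
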

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option B: While it's important for the CodePipeline service role to have permission to access the CodeCommit repository, this permission impacts the ability of the pipeline to access repository content during execution rather than triggering the pipeline start.

  • ❌ Option C: The developer's IAM role permission to push to the CodeCommit repository affects the ability to push code changes but does not impact the triggering of the pipeline. The issue described indicates that the push was successful, but the pipeline did not start.

  • ❌ Option D: Checking for CodeCommit errors in Amazon CloudWatch Logs can help identify issues with the code push itself or other errors but does not directly address the lack of automatic pipeline triggering. The primary cause in this scenario is likely the absence or misconfiguration of a trigger mechanism.

A6

A company's developers use Amazon EC2 instances as remote workstations.
The company is concerned that users can create or modify EC2 security groups to allow unrestricted inbound access.
A DevOps engineer needs to develop a solution to detect when users create unrestricted security group rules.
The solution must detect changes to security group rules in near real time, remove unrestricted rules, and send email notifications to the security team.
The DevOps engineer has created an AWS Lambda function that accepts a security group ID as input, removes rules that grant unrestricted access, and sends notifications through Amazon Simple Notification Service (Amazon SNS).
What should the DevOps engineer do next to meet the requirements?

  1. πŸš€ Option A: Configure the Lambda function to be invoked by the SNS topic. Create an AWS CloudTrail subscription for the SNS topic. Configure a subscription filter for security group modification events.
  2. πŸš€ Option B: Create an Amazon EventBridge scheduled rule to invoke the Lambda function. Define a schedule pattern that runs the Lambda function every hour.
  3. πŸš€ Option C: Create an Amazon EventBridge event rule that has the default event bus as the source. Define the rule’s event pattern to match EC2 security group creation and modification events. Configure the rule to invoke the Lambda function.
  4. πŸš€ Option D: Create an Amazon EventBridge custom event bus that subscribes to events from all AWS services. Configure the Lambda function to be invoked by the custom event bus.

Q7

The Correct Answer is:

  • βœ… Option C: Create an Amazon EventBridge event rule that has the default event bus as the source. Define the rule’s event pattern to match EC2 security group creation and modification events. Configure the rule to invoke the Lambda function.

  • βœ… Here's why:

  1. βœ… Real-time Detection: Amazon EventBridge allows for real-time detection of AWS service events. By creating an event rule that specifically looks for EC2 security group creation and modification events, the solution can immediately trigger the Lambda function to evaluate the security group for unrestricted access rules.

  2. βœ… Direct Invocation: Configuring the rule to directly invoke the Lambda function ensures that any changes to security groups are processed as soon as they occur, enabling the immediate removal of unrestricted rules and notification to the security team.
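
One way to express such a rule's event pattern is shown below. Security group API activity reaches EventBridge as "AWS API Call via CloudTrail" events, so CloudTrail must be logging management events in the Region; the exact list of `eventName` values to match is an assumption to adjust to the threat model.

```python
# EventBridge pattern on the default bus matching security group creation and
# rule-modification API calls delivered via CloudTrail.
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": [
            "CreateSecurityGroup",
            "AuthorizeSecurityGroupIngress",
            "ModifySecurityGroupRules",
        ],
    },
}
```

The rule's target is the existing Lambda function; the matched event's detail carries the security group ID the function needs as input.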

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: While this option involves using SNS and CloudTrail, it's more complex and not as direct or efficient for real-time detection and response to security group modifications as using EventBridge.

  • ❌ Option B: Scheduling the Lambda function to run every hour may not meet the requirement for near real-time detection and response. Security group modifications could go unchecked for up to an hour.

  • ❌ Option D: Creating a custom event bus for all AWS service events is unnecessary for this specific use case and could introduce additional complexity without providing benefits over using the default event bus.

A7

A DevOps engineer is creating an AWS CloudFormation template to deploy a web service.
The web service will run on Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB).
The DevOps engineer must ensure that the service can accept requests from clients that have IPv6 addresses.
What should the DevOps engineer do with the CloudFormation template so that IPv6 clients can access the web service?

  1. πŸš€ Option A: Add an IPv6 CIDR block to the VPC and the private subnet for the EC2 instances. Create route table entries for the IPv6 network, use EC2 instance types that support IPv6, and assign IPv6 addresses to each EC2 instance.
  2. πŸš€ Option B: Assign each EC2 instance an IPv6 Elastic IP address. Create a target group and add the EC2 instances as targets. Create a listener on port 443 of the ALB, and associate the target group with the ALB.
  3. πŸš€ Option C: Replace the ALB with a Network Load Balancer (NLB). Add an IPv6 CIDR block to the VPC and subnets for the NLB, and assign the NLB an IPv6 Elastic IP address.
  4. πŸš€ Option D: Add an IPv6 CIDR block to the VPC and subnets for the ALB. Create a listener on port 443, and specify the dualstack IP address type on the ALB. Create a target group and add the EC2 instances as targets. Associate the target group with the ALB.

Q8

The Correct Answer is:

  • βœ… Option D: Add an IPv6 CIDR block to the VPC and subnets for the ALB. Create a listener on port 443, and specify the dualstack IP address type on the ALB. Create a target group and add the EC2 instances as targets. Associate the target group with the ALB.

  • βœ… Here's why:

  1. βœ… IPv6 Compatibility: Adding an IPv6 CIDR block to the VPC and the subnets where the ALB is located enables the network infrastructure to handle IPv6 traffic, which is essential for servicing clients with IPv6 addresses.

  2. βœ… Dualstack IP Address Type: Configuring the ALB to use the dualstack IP address type allows it to accept both IPv4 and IPv6 connections. This ensures that the web service is accessible to clients regardless of their IP addressing scheme.

  3. βœ… Port 443 and Security: Creating a listener on port 443 (HTTPS) ensures secure communication. Associating the target group with this listener allows the ALB to route traffic to the backend EC2 instances securely.

  4. βœ… Direct Target Group Association: By directly adding EC2 instances to the target group and associating it with the ALB, the solution ensures that traffic reaching the ALB is correctly routed to the instances in the private subnet, facilitating access to the web service without compromising on the new IPv6 requirement.
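
The key property from Option D can be sketched as a CloudFormation-style resource, shown as a Python dict for illustration; the subnet IDs are placeholders and must belong to subnets that have IPv6 CIDR blocks assigned.

```python
# Dualstack ALB: the load balancer answers on both IPv4 and IPv6 while the
# backend targets in the private subnet can remain IPv4-only.
alb = {
    "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
    "Properties": {
        "Scheme": "internet-facing",
        "IpAddressType": "dualstack",             # accepts IPv6 clients
        "Subnets": ["subnet-aaa111", "subnet-bbb222"],  # placeholders
    },
}
```

This is why Option D works without touching the instances: the ALB terminates the IPv6 connection and forwards to the targets over IPv4.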

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: While adding an IPv6 CIDR block is a step in the right direction, this option does not address the need for the ALB to handle IPv6 traffic specifically.

  • ❌ Option B: Assigning an IPv6 Elastic IP address to each EC2 instance is unnecessary for this scenario and does not address the core requirement of enabling IPv6 client access through the ALB.

  • ❌ Option C: Replacing the ALB with an NLB and assigning it an IPv6 Elastic IP address is not required for the given scenario. The ALB can support IPv6 traffic through dualstack configuration, which is more aligned with the needs of a web service.

A8

A company uses AWS Organizations and AWS Control Tower to manage all the company's AWS accounts.
The company uses the Enterprise Support plan.
A DevOps engineer is using Account Factory for Terraform (AFT) to provision new accounts.
When new accounts are provisioned, the DevOps engineer notices that the support plan for the new accounts is set to the Basic Support plan.
The DevOps engineer needs to implement a solution to provision the new accounts with the Enterprise Support plan.
Which solution will meet these requirements?

  1. πŸš€ Option A: Use an AWS Config conformance pack to deploy the account-part-of-organizations AWS Config rule and to automatically remediate any noncompliant accounts.
  2. πŸš€ Option B: Create an AWS Lambda function to create a ticket for AWS Support to add the account to the Enterprise Support plan. Grant the Lambda function the support:ResolveCase permission.
  3. πŸš€ Option C: Add an additional value to the control_tower_parameters input to set the AWSEnterpriseSupport parameter as the organization's management account number.
  4. πŸš€ Option D: Set the aft_feature_enterprise_support feature flag to True in the AFT deployment input configuration. Redeploy AFT and apply the changes.

Q9

The Correct Answer is:

  • βœ… Option D: Set the aft_feature_enterprise_support feature flag to True in the AFT deployment input configuration. Redeploy AFT and apply the changes.

  • βœ… Here's why:

  1. βœ… Direct Configuration: Setting the aft_feature_enterprise_support feature flag to True directly in the AFT configuration ensures that new accounts are automatically provisioned with the Enterprise Support plan. This approach leverages AFT's built-in capabilities for managing account configurations and support plans, making it a streamlined and efficient solution.

  2. βœ… Automation and Scalability: By modifying the AFT deployment input configuration and redeploying, the DevOps engineer can automate the process of provisioning new accounts with the desired support plan. This method is scalable and does not require manual intervention for each new account, aligning well with the needs of organizations managing multiple AWS accounts.
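
The relevant input can be sketched as follows. In practice these are Terraform variables on the AFT module; they are shown here as a Python dict for illustration, and the account ID is a placeholder.

```python
# AFT deployment inputs with Enterprise Support enrollment enabled. When this
# flag is True, AFT opens a support case on the new account's behalf to enroll
# it in the Enterprise Support plan during provisioning.
aft_inputs = {
    "ct_management_account_id": "111122223333",  # placeholder account ID
    "aft_feature_enterprise_support": True,
}
```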

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: While AWS Config conformance packs are useful for compliance and governance, they do not directly influence the support plan of newly created AWS accounts.

  • ❌ Option B: Creating a Lambda function to manually create a ticket for AWS Support introduces unnecessary complexity and manual steps into the process, which is less efficient than leveraging AFT's built-in features.

  • ❌ Option C: Adding an additional value to the control_tower_parameters input to set the AWSEnterpriseSupport parameter does not directly apply to AFT's mechanism for setting the support plan for new accounts. This option might be confusing or irrelevant to the specific functionality provided by AFT.

A9

A company's DevOps engineer uses AWS Systems Manager to perform maintenance tasks during maintenance windows.
The company has a few Amazon EC2 instances that require a restart after notifications from AWS Health.
The DevOps engineer needs to implement an automated solution to remediate these notifications.
The DevOps engineer creates an Amazon EventBridge rule.
How should the DevOps engineer configure the EventBridge rule to meet these requirements?

  1. πŸš€ Option A: Configure an event source of AWS Health, a service of EC2, and an event type that indicates instance maintenance. Target a Systems Manager document to restart the EC2 instance.
  2. πŸš€ Option B: Configure an event source of Systems Manager and an event type that indicates a maintenance window. Target a Systems Manager document to restart the EC2 instance.
  3. πŸš€ Option C: Configure an event source of AWS Health, a service of EC2, and an event type that indicates instance maintenance. Target a newly created AWS Lambda function that registers an automation task to restart the EC2 instance during a maintenance window.
  4. πŸš€ Option D: Configure an event source of EC2 and an event type that indicates instance maintenance. Target a newly created AWS Lambda function that registers an automation task to restart the EC2 instance during a maintenance window.

Q10

The Correct Answer is:

  • βœ… Option A: Configure an event source of AWS Health, a service of EC2, and an event type that indicates instance maintenance. Target a Systems Manager document to restart the EC2 instance.

  • βœ… Here's why:

  1. βœ… Direct Response to AWS Health Notifications: AWS Health provides notifications about AWS services and resources, including maintenance events for EC2 instances. By targeting these specific events, the solution ensures that only relevant maintenance notifications trigger the remediation action.

  2. βœ… Utilization of Systems Manager for Automation: AWS Systems Manager allows for the automation of maintenance and management tasks on AWS resources. Targeting a Systems Manager document to restart the EC2 instance provides a straightforward and efficient method for remediating the issue identified by AWS Health.
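
The rule's event pattern can be sketched as follows. The `eventTypeCategory` value shown is an assumption for scheduled EC2 maintenance; the exact event type codes to match should be confirmed against the AWS Health event schema.

```python
# EventBridge pattern for AWS Health notifications about scheduled EC2
# maintenance. The category value is an assumed filter; real eventTypeCode
# values vary by maintenance type.
event_pattern = {
    "source": ["aws.health"],
    "detail-type": ["AWS Health Event"],
    "detail": {
        "service": ["EC2"],
        "eventTypeCategory": ["scheduledChange"],
    },
}
```

The rule's target is then a Systems Manager Automation document (such as a restart runbook) that receives the affected instance IDs from the event detail.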

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option B: This option targets Systems Manager as the event source, focusing on maintenance windows rather than responding to specific AWS Health notifications related to instance maintenance.

  • ❌ Option C and Option D: While creating a Lambda function to handle the event introduces flexibility, it adds unnecessary complexity for tasks that can be directly managed through Systems Manager. Additionally, there's no direct event source of EC2 for instance maintenance in the context of AWS Health notifications.

A10

Thanks for Watching