AWS Certified DevOps Engineer - Professional - QnA - Part 2

An ecommerce company has chosen AWS to host its new platform.
The company's DevOps team has started building an AWS Control Tower landing zone.
The DevOps team has set the identity store within AWS Single Sign-On (AWS SSO) to an external identity provider (IdP) and has configured SAML 2.0.
The DevOps team wants a robust permission model that applies the principle of least privilege.
The model must allow the team to build and manage only the team's own resources.
Which combination of steps will meet these requirements? (Choose three.)

  • A. Create IAM policies that include the required permissions. Include the aws:PrincipalTag condition key.
  • B. Create permission sets. Attach an inline policy that includes the required permissions and uses the aws:PrincipalTag condition key to scope the permissions.
  • C. Create a group in the IdP. Place users in the group. Assign the group to accounts and the permission sets in AWS SSO.
  • D. Create a group in the IdP. Place users in the group. Assign the group to OUs and IAM policies.
  • E. Enable attributes for access control in AWS SSO. Apply tags to users. Map the tags as key-value pairs.
  • F. Enable attributes for access control in AWS SSO. Map attributes from the IdP as key-value pairs.

Q1

To establish a permission model that applies the principle of least privilege, so that the DevOps team can build and manage only its own resources in an environment built on AWS Control Tower and AWS SSO with an external identity provider (IdP) and SAML 2.0, the following steps are recommended:

  • B. Create permission sets. Attach an inline policy that includes the required permissions and uses the aws:PrincipalTag condition key to scope the permissions.

  • C. Create a group in the IdP. Place users in the group. Assign the group to accounts and the permission sets in AWS SSO.

  • F. Enable attributes for access control in AWS SSO. Map attributes from the IdP as key-value pairs.

  • ✅ Here's why:

  1. Scoped Permissions with Permission Sets: Creating permission sets in AWS SSO allows for the granular assignment of permissions tailored to the needs of the DevOps team. Utilizing the aws:PrincipalTag condition key in inline policies attached to these permission sets enables precise control over resource access based on tags, enforcing least privilege access (see the policy sketch after this list).

  2. Group-Based Access Management: By creating groups within the external IdP and mapping these to specific accounts and permission sets in AWS SSO, the organization can streamline access management. This approach simplifies the process of assigning and revoking access as team membership changes, ensuring that only authorized users can build and manage resources.

  3. Attribute-Based Access Control (ABAC): Enabling attributes for access control in AWS SSO and mapping attributes from the IdP as key-value pairs allows for dynamic permission adjustments. This method leverages information from the IdP to make access decisions, enhancing security and flexibility in permission assignments based on user attributes.
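
A minimal sketch of point 1, using boto3. The instance and permission set ARNs and the team tag key are placeholders; the policy grants EC2 actions only on resources whose team tag matches the signed-in user's team attribute (mapped from the IdP as in option F).

```python
import json

import boto3

# Placeholder ARNs; copy the real values from your AWS SSO instance.
INSTANCE_ARN = "arn:aws:sso:::instance/ssoins-EXAMPLE"
PERMISSION_SET_ARN = "arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE"

# Allow EC2 management only where the resource's "team" tag matches the
# caller's "team" principal tag.
inline_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:*",
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/team": "${aws:PrincipalTag/team}"
                }
            },
        }
    ],
}

sso_admin = boto3.client("sso-admin")
sso_admin.put_inline_policy_to_permission_set(
    InstanceArn=INSTANCE_ARN,
    PermissionSetArn=PERMISSION_SET_ARN,
    InlinePolicy=json.dumps(inline_policy),
)
```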

  • 🔴 Now, let's examine why the other options are not the best choice:

  • A. Create IAM policies that include the required permissions. Include the aws:PrincipalTag condition key.: While this approach can provide scoped permissions, it doesn't leverage the centralized management and integration capabilities of AWS SSO with an external IdP, missing out on the benefits of streamlined access management across multiple AWS accounts.

  • D. Create a group in the IdP. Place users in the group. Assign the group to OUs and IAM policies.: Assigning groups from an IdP directly to organizational units (OUs) and IAM policies is not a direct feature of AWS SSO and lacks the granularity and flexibility offered by permission sets within AWS SSO.

  • E. Enable attributes for access control in AWS SSO. Apply tags to users. Map the tags as key-value pairs.: AWS SSO does not directly apply tags to users. Instead, it uses attributes from the IdP to control access, making this option infeasible for the desired access control model.

A1

An ecommerce company is receiving reports that its order history page is experiencing delays in reflecting the processing status of orders.
The order processing system consists of an AWS Lambda function using reserved concurrency.
The Lambda function processes order messages from an Amazon SQS queue and inserts processed orders into an Amazon DynamoDB table.
The DynamoDB table has Auto Scaling enabled for read and write capacity.
Which actions will diagnose and resolve the delay? (Choose two.)

  • A. Check the ApproximateAgeOfOldestMessage metric for the SQS queue and increase the Lambda function concurrency limit.
  • B. Check the ApproximateAgeOfOldestMessage metric for the SQS queue and configure a redrive policy on the SQS queue.
  • C. Check the NumberOfMessagesSent metric for the SQS queue and increase the SQS queue visibility timeout.
  • D. Check the ThrottledWriteRequests metric for the DynamoDB table and increase the maximum write capacity units for the table's Auto Scaling policy.
  • E. Check the Throttles metric for the Lambda function and increase the Lambda function timeout.

Q2

To diagnose and resolve the delay in reflecting the processing status of orders for the ecommerce company, the appropriate actions to take are:

  • A. Check the ApproximateAgeOfOldestMessage metric for the SQS queue and increase the Lambda function concurrency limit.

  • D. Check the ThrottledWriteRequests metric for the DynamoDB table and increase the maximum write capacity units for the table's Auto Scaling policy.

  • ✅ Here's why:

  1. A. Checking the ApproximateAgeOfOldestMessage Metric: This metric helps identify if messages are staying in the queue longer than expected, which indicates a processing backlog. Increasing the Lambda function's reserved concurrency limit can help process more messages concurrently, reducing the age of the oldest message and addressing delays (the sketch after this list shows this check together with the capacity change from option D).

  2. D. Checking the ThrottledWriteRequests Metric: If the DynamoDB table is experiencing write throttles, this can directly impact the speed at which processed orders are inserted into the table. Increasing the maximum write capacity units for Auto Scaling can alleviate these bottlenecks, ensuring that the write operations do not get throttled and orders are updated promptly.
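
A minimal sketch of the diagnosis and the two fixes, using boto3. Queue, function, and table names are placeholders; the concurrency value and capacity ceiling are illustrative.

```python
from datetime import datetime, timedelta, timezone

import boto3

now = datetime.now(timezone.utc)

# 1. Diagnose: is a backlog building? A rising ApproximateAgeOfOldestMessage
#    means messages wait longer than the Lambda consumer can keep up with.
cloudwatch = boto3.client("cloudwatch")
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/SQS",
    MetricName="ApproximateAgeOfOldestMessage",
    Dimensions=[{"Name": "QueueName", "Value": "orders-queue"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Maximum"],
)
print(max((p["Maximum"] for p in stats["Datapoints"]), default=0))

# 2. Fix A: raise the function's reserved concurrency so more messages
#    are processed in parallel.
boto3.client("lambda").put_function_concurrency(
    FunctionName="process-orders",
    ReservedConcurrentExecutions=100,
)

# 3. Fix D: raise the ceiling of the DynamoDB table's write Auto Scaling
#    policy so writes stop being throttled.
boto3.client("application-autoscaling").register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/Orders",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)
```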

  • 🔴 Now, let's examine why the other options are not the best choice:

  • B. Check the ApproximateAgeOfOldestMessage metric for the SQS queue and configure a redrive policy on the SQS queue.: While a redrive policy is useful for managing messages that fail to process correctly, it does not address the underlying issue of processing delays due to insufficient concurrency or database write throttles.

  • C. Check the NumberOfMessagesSent metric for the SQS queue and increase the SQS queue visibility timeout.: Increasing the visibility timeout may prevent messages from being processed by multiple consumers simultaneously, but it does not address the core issue of processing delays or database write capacity.

  • E. Check the Throttles metric for the Lambda function and increase the Lambda function timeout.: Increasing the function timeout may help in situations where the function execution time is the bottleneck. However, in this scenario, the delays are more likely related to the processing rate (handled by concurrency settings) and database write capacity, not the execution time of the Lambda function itself.

A2

A company has a single AWS account that runs hundreds of Amazon EC2 instances in a single AWS Region.
New EC2 instances are launched and terminated each hour in the account.
The account also includes existing EC2 instances that have been running for longer than a week.
The company's security policy requires all running EC2 instances to use an EC2 instance profile.
If an EC2 instance does not have an instance profile attached, the EC2 instance must use a default instance profile that has no IAM permissions assigned.
A DevOps engineer reviews the account and discovers EC2 instances that are running without an instance profile.
During the review, the DevOps engineer also observes that new EC2 instances are being launched without an instance profile.
Which solution will ensure that an instance profile is attached to all existing and future EC2 instances in the Region?

  • A. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that reacts to EC2 RunInstances API calls. Configure the rule to invoke an AWS Lambda function to attach the default instance profile to the EC2 instances.
  • B. Configure the ec2-instance-profile-attached AWS Config managed rule with a trigger type of configuration changes. Configure an automatic remediation action that invokes an AWS Systems Manager Automation runbook to attach the default instance profile to the EC2 instances.
  • C. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that reacts to EC2 StartInstances API calls. Configure the rule to invoke an AWS Systems Manager Automation runbook to attach the default instance profile to the EC2 instances.
  • D. Configure the iam-role-managed-policy-check AWS Config managed rule with a trigger type of configuration changes. Configure an automatic remediation action that invokes an AWS Lambda function to attach the default instance profile to the EC2 instances.

Q3

The solution that ensures all existing and future EC2 instances in the Region have an instance profile attached is:

  • B. Configure the ec2-instance-profile-attached AWS Config managed rule with a trigger type of configuration changes. Configure an automatic remediation action that invokes an AWS Systems Manager Automation runbook to attach the default instance profile to the EC2 instances.

  • ✅ Here's why:

  1. Continuous Compliance Monitoring: AWS Config continuously monitors and records AWS resource configurations, enabling it to automatically detect EC2 instances launched without an instance profile.

  2. Automatic Remediation: Configuring an automatic remediation action with AWS Systems Manager Automation allows for the immediate attachment of the default instance profile to non-compliant EC2 instances, ensuring compliance with the security policy (see the sketch after this list).

  3. Scalability and Efficiency: This approach provides a scalable and efficient method to enforce compliance across all EC2 instances in the Region, both existing and newly launched, without manual intervention.
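
A minimal sketch of option B, using boto3. The managed rule identifier (EC2_INSTANCE_PROFILE_ATTACHED) is an AWS Config managed rule; the runbook name and role ARN are placeholders for a custom Automation document that would call ec2:AssociateIamInstanceProfile with the default profile.

```python
import boto3

config = boto3.client("config")

# Managed rule that flags EC2 instances without an IAM instance profile.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ec2-instance-profile-attached",
        "Source": {
            "Owner": "AWS",
            "SourceIdentifier": "EC2_INSTANCE_PROFILE_ATTACHED",
        },
    }
)

# Automatic remediation through a Systems Manager Automation runbook.
# "Attach-Default-Instance-Profile" is a hypothetical custom runbook.
config.put_remediation_configurations(
    RemediationConfigurations=[
        {
            "ConfigRuleName": "ec2-instance-profile-attached",
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "Attach-Default-Instance-Profile",
            "Automatic": True,
            "MaximumAutomaticAttempts": 3,
            "RetryAttemptSeconds": 60,
            "Parameters": {
                # Pass the noncompliant instance ID to the runbook.
                "InstanceId": {"ResourceValue": {"Value": "RESOURCE_ID"}},
                "AutomationAssumeRole": {
                    "StaticValue": {
                        "Values": ["arn:aws:iam::111122223333:role/ConfigRemediationRole"]
                    }
                },
            },
        }
    ]
)
```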

  • 🔴 Now, let's examine why the other options are not the best choice:

  • A. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that reacts to EC2 RunInstances API calls. Configure the rule to invoke an AWS Lambda function to attach the default instance profile to the EC2 instances.: This method focuses on new instances only and does not address existing instances. Additionally, it requires custom logic for attachment, increasing complexity.

  • C. Configure an Amazon EventBridge (Amazon CloudWatch Events) rule that reacts to EC2 StartInstances API calls. Configure the rule to invoke an AWS Systems Manager Automation runbook to attach the default instance profile to the EC2 instances.: This approach does not capture instances at creation but rather at start, potentially missing instances that remain stopped for an extended period before starting.

  • D. Configure the iam-role-managed-policy-check AWS Config managed rule with a trigger type of configuration changes. Configure an automatic remediation action that invokes an AWS Lambda function to attach the default instance profile to the EC2 instances.: This rule focuses on IAM roles and managed policies rather than directly addressing the attachment of instance profiles to EC2 instances, making it less relevant to the specific requirement.

A3

A DevOps engineer is building a continuous deployment pipeline for a serverless application that uses AWS Lambda functions.
The company wants to reduce the customer impact of an unsuccessful deployment.
The company also wants to monitor for issues.
Which deploy stage configuration will meet these requirements?

  • A. Use an AWS Serverless Application Model (AWS SAM) template to define the serverless application. Use AWS CodeDeploy to deploy the Lambda functions with the Canary10Percent15Minutes Deployment Preference Type. Use Amazon CloudWatch alarms to monitor the health of the functions.
  • B. Use AWS CloudFormation to publish a new stack update, and include Amazon CloudWatch alarms on all resources. Set up an AWS CodePipeline approval action for a developer to verify and approve the AWS CloudFormation change set.
  • C. Use AWS CloudFormation to publish a new version on every stack update, and include Amazon CloudWatch alarms on all resources. Use the RoutingConfig property of the AWS::Lambda::Alias resource to update the traffic routing during the stack update.
  • D. Use AWS CodeBuild to add sample event payloads for testing to the Lambda functions. Publish a new version of the functions, and include Amazon CloudWatch alarms. Update the production alias to point to the new version. Configure rollbacks to occur when an alarm is in the ALARM state.

Q4

The deploy stage configuration that meets the requirements for reducing customer impact during deployment and monitoring for issues in a serverless application using AWS Lambda functions is:

  • A. Use an AWS Serverless Application Model (AWS SAM) template to define the serverless application. Use AWS CodeDeploy to deploy the Lambda functions with the Canary10Percent15Minutes Deployment Preference Type. Use Amazon CloudWatch alarms to monitor the health of the functions.

  • ✅ Here's why:

  1. Gradual Deployment: The Canary10Percent15Minutes deployment preference type ensures that only a small percentage of the traffic (10%) is routed to the new version of the Lambda function for 15 minutes initially. This approach minimizes the impact on customers if an issue arises with the new version.
  2. Monitoring and Rollback: Amazon CloudWatch alarms can be set to monitor the health of the Lambda functions during and after deployment. If issues are detected, AWS CodeDeploy can automatically roll back to the previous stable version, further reducing the potential impact on customers (see the alarm sketch after this list).
  3. Serverless Application Definition: Using AWS SAM to define the serverless application simplifies the process of deploying complex serverless applications, including Lambda functions, and integrates well with AWS CodeDeploy for deployment strategies.
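
A minimal sketch of point 2: the CloudWatch alarm that CodeDeploy would watch during the canary window, created here with boto3 (in a SAM template the alarm is referenced under DeploymentPreference.Alarms). The function and alias names are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on invocation errors from the "live" alias; if it fires during the
# canary window, CodeDeploy shifts traffic back to the previous version.
cloudwatch.put_metric_alarm(
    AlarmName="orders-fn-canary-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[
        {"Name": "FunctionName", "Value": "orders-fn"},
        {"Name": "Resource", "Value": "orders-fn:live"},
    ],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=3,
    Threshold=1.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
)
```
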
  • 🔴 Now, let's examine why the other options are not the best choice:

  • B. Use AWS CloudFormation to publish a new stack update, and include Amazon CloudWatch alarms on all resources. Set up an AWS CodePipeline approval action for a developer to verify and approve the AWS CloudFormation change set.: A manual approval gate relies on human review and adds delay; it provides neither gradual traffic shifting nor automatic rollback, so a faulty change still reaches all customers at once after it is approved.

  • C. Use AWS CloudFormation to publish a new version on every stack update, and include Amazon CloudWatch alarms on all resources. Use the RoutingConfig property of the AWS::Lambda::Alias resource to update the traffic routing during the stack update.: The RoutingConfig property can split traffic between two Lambda versions, but orchestrating the weight shifts through successive stack updates is manual and error prone compared with the alarm-driven canary shifting and automatic rollback that AWS CodeDeploy provides through AWS SAM.

  • D. Use AWS CodeBuild to add sample event payloads for testing to the Lambda functions. Publish a new version of the functions, and include Amazon CloudWatch alarms. Update the production alias to point to the new version. Configure rollbacks to occur when an alarm is in the ALARM state.: Repointing the production alias shifts 100% of the traffic to the new version at once, so every customer is exposed before an alarm can detect a problem; a canary deployment limits that exposure to a small slice of traffic.

A4

To run an application, a DevOps engineer launches Amazon EC2 instances with public IP addresses in a public subnet.
A user data script obtains the application artifacts and installs them on the instances upon launch.
A change to the security classification of the application now requires the instances to run with no access to the Internet.
While the instances launch successfully and show as healthy, the application does not seem to be installed.
Which of the following should successfully install the application while complying with the new rule?

  • A. Launch the instances in a public subnet with Elastic IP addresses attached. Once the application is installed and running, run a script to disassociate the Elastic IP addresses afterwards.
  • B. Set up a NAT gateway. Deploy the EC2 instances to a private subnet. Update the private subnet's route table to use the NAT gateway as the default route.
  • C. Publish the application artifacts to an Amazon S3 bucket and create a VPC endpoint for S3. Assign an IAM instance profile to the EC2 instances so they can read the application artifacts from the S3 bucket.
  • D. Create a security group for the application instances and whitelist only outbound traffic to the artifact repository. Remove the security group rule once the install is complete.

Q5

To successfully install the application while complying with the new security classification requiring no Internet access, the following solution is recommended:

  • C. Publish the application artifacts to an Amazon S3 bucket and create a VPC endpoint for S3. Assign an IAM instance profile to the EC2 instances so they can read the application artifacts from the S3 bucket.

  • ✅ Here's why:

  1. No Internet Access Required: By using an Amazon S3 bucket in conjunction with a VPC endpoint, the EC2 instances can access the application artifacts without needing to access the Internet. This setup adheres to the new security requirement (see the sketch after this list).
  2. Secure Artifact Retrieval: Assigning an IAM instance profile to the EC2 instances provides secure, role-based access to the S3 bucket. This ensures that only authorized instances can retrieve the application artifacts.
  3. Simplifies Deployment: This approach simplifies the deployment process by leveraging AWS native services for secure artifact storage and retrieval without complicating the network architecture.
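
A minimal sketch of the endpoint from point 1, using boto3. The VPC ID, route table ID, and Region are placeholders; a gateway endpoint adds an S3 route to the subnet's route table, so no Internet path is needed.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3: traffic to the bucket stays on the AWS network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```
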
  • 🔴 Now, let's examine why the other options are not the best choice:

  • A. Launch the instances in a public subnet with Elastic IP addresses attached. Once the application is installed and running, run a script to disassociate the Elastic IP addresses afterwards.: This method temporarily provides Internet access, which violates the requirement for instances to run with no Internet access at all times.

  • B. Set up a NAT gateway. Deploy the EC2 instances to a private subnet. Update the private subnet's route table to use the NAT gateway as the default route.: While this approach allows instances in a private subnet to access the Internet indirectly, it still provides Internet access, which contradicts the security requirement.

  • D. Create a security group for the application instances and whitelist only outbound traffic to the artifact repository. Remove the security group rule once the install is complete.: Modifying security group rules to temporarily allow Internet access risks non-compliance with the no Internet access requirement and may not be feasible for all deployment scenarios.

A5

A development team is using AWS CodeCommit to version control application code and AWS CodePipeline to orchestrate software deployments.
The team has decided to use a remote master branch as the trigger for the pipeline to integrate code changes.
A developer has pushed code changes to the CodeCommit repository, but noticed that the pipeline had no reaction, even after 10 minutes.
Which of the following actions should be taken to troubleshoot this issue?

  • A. Check that an Amazon CloudWatch Events rule has been created for the master branch to trigger the pipeline.
  • B. Check that the CodePipeline service role has permission to access the CodeCommit repository.
  • C. Check that the developer's IAM role has permission to push to the CodeCommit repository.
  • D. Check to see if the pipeline failed to start because of CodeCommit errors in Amazon CloudWatch Logs.

Q6

To troubleshoot the issue of the pipeline not reacting to code changes pushed to the CodeCommit repository, the following action is recommended:

  • A. Check that an Amazon CloudWatch Events rule has been created for the master branch to trigger the pipeline.

  • ✅ Here's why:

  1. Trigger Configuration: Amazon CloudWatch Events (now known as Amazon EventBridge) is commonly used to detect changes in AWS services, such as commits to a CodeCommit repository, and trigger actions in response. If the pipeline didn't react, it's likely that the event rule set up to trigger the pipeline on changes to the master branch is missing or misconfigured (see the rule sketch after this list).
  2. Immediate Response: Properly configured CloudWatch Events rules ensure that CodePipeline starts the deployment process immediately after changes are detected in the specified branch of the CodeCommit repository. This is crucial for continuous integration and continuous deployment (CI/CD) workflows.
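
A minimal sketch of the rule from point 1, using boto3. The repository, pipeline, and role ARNs are placeholders; the role must allow codepipeline:StartPipelineExecution.

```python
import json

import boto3

events = boto3.client("events")

# Fire only when the master branch of the repository is updated.
events.put_rule(
    Name="start-pipeline-on-master-push",
    EventPattern=json.dumps({
        "source": ["aws.codecommit"],
        "detail-type": ["CodeCommit Repository State Change"],
        "resources": ["arn:aws:codecommit:us-east-1:111122223333:app-repo"],
        "detail": {
            "event": ["referenceCreated", "referenceUpdated"],
            "referenceType": ["branch"],
            "referenceName": ["master"],
        },
    }),
)

# Start the pipeline when the rule matches.
events.put_targets(
    Rule="start-pipeline-on-master-push",
    Targets=[{
        "Id": "app-pipeline",
        "Arn": "arn:aws:codepipeline:us-east-1:111122223333:app-pipeline",
        "RoleArn": "arn:aws:iam::111122223333:role/StartPipelineRole",
    }],
)
```
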
  • 🔴 Now, let's examine why the other options are not the best choice:

  • B. Check that the CodePipeline service role has permission to access the CodeCommit repository.: While it's important for the CodePipeline service role to have access to the CodeCommit repository, this issue would typically prevent the pipeline from accessing the repository at all stages, not just from triggering on a commit. Since the scenario suggests the pipeline isn't starting, the problem is likely with the trigger mechanism.

  • C. Check that the developer's IAM role has permission to push to the CodeCommit repository.: If the developer was able to push code changes to the repository, it indicates that the developer's IAM role has the necessary permissions. The issue described does not relate to pushing code but to the pipeline not starting in response to these pushes.

  • D. Check to see if the pipeline failed to start because of CodeCommit errors in Amazon CloudWatch Logs.: While monitoring CloudWatch Logs for errors is a good practice, the absence of a reaction from the pipeline to a commit indicates a problem with the trigger mechanism rather than errors within the pipeline's execution stages or CodeCommit itself.

A6

A company's developers use Amazon EC2 instances as remote workstations.
The company is concerned that users can create or modify EC2 security groups to allow unrestricted inbound access.
A DevOps engineer needs to develop a solution to detect when users create unrestricted security group rules.
The solution must detect changes to security group rules in near real time, remove unrestricted rules, and send email notifications to the security team.
The DevOps engineer has created an AWS Lambda function that accepts a security group ID as input, removes rules that grant unrestricted access, and sends notifications through Amazon Simple Notification Service (Amazon SNS).
What should the DevOps engineer do next to meet the requirements?

  • A. Configure the Lambda function to be invoked by the SNS topic. Create an AWS CloudTrail subscription for the SNS topic. Configure a subscription filter for security group modification events.
  • B. Create an Amazon EventBridge scheduled rule to invoke the Lambda function. Define a schedule pattern that runs the Lambda function every hour.
  • C. Create an Amazon EventBridge event rule that has the default event bus as the source. Define the rule’s event pattern to match EC2 security group creation and modification events. Configure the rule to invoke the Lambda function.
  • D. Create an Amazon EventBridge custom event bus that subscribes to events from all AWS services. Configure the Lambda function to be invoked by the custom event bus.

Q7

To meet the requirements for detecting changes to security group rules in near real time, removing unrestricted rules, and sending email notifications to the security team, the DevOps engineer should:

  • C. Create an Amazon EventBridge event rule that has the default event bus as the source. Define the rule’s event pattern to match EC2 security group creation and modification events. Configure the rule to invoke the Lambda function.

  • ✅ Here's why:

  1. Real-time Detection: Amazon EventBridge can detect changes to security group rules in real time. By setting up an event rule specifically for EC2 security group creation and modification events, the system ensures immediate detection and response (see the rule sketch after this list).

  2. Direct Invocation of Lambda: Configuring the EventBridge rule to directly invoke the Lambda function simplifies the architecture by eliminating intermediate services. This direct invocation allows the Lambda function to promptly process the event, modify security groups as necessary, and send notifications.

  3. Focused Event Pattern: Defining an event pattern that specifically matches security group modifications ensures that the Lambda function is invoked only for relevant events, improving efficiency and reducing unnecessary invocations.
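
A minimal sketch of the rule from point 1, using boto3. The Lambda ARN is a placeholder; matching "AWS API Call via CloudTrail" events requires a CloudTrail trail that logs management events, and the function also needs a resource-based permission (lambda add_permission) allowing events.amazonaws.com to invoke it.

```python
import json

import boto3

events = boto3.client("events")

# Match security group rule changes on the default event bus, delivered
# within seconds of the API call via CloudTrail.
events.put_rule(
    Name="detect-sg-rule-changes",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["ec2.amazonaws.com"],
            "eventName": [
                "AuthorizeSecurityGroupIngress",
                "ModifySecurityGroupRules",
            ],
        },
    }),
)

# Invoke the existing remediation Lambda function.
events.put_targets(
    Rule="detect-sg-rule-changes",
    Targets=[{
        "Id": "remediation-fn",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:remediate-sg",
    }],
)
```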

  • 🔴 Now, let's examine why the other options are not the best choice:

  • A. Configure the Lambda function to be invoked by the SNS topic. Create an AWS CloudTrail subscription for the SNS topic. Configure a subscription filter for security group modification events.: CloudTrail and SNS can be used for monitoring and notifications, but this approach does not provide the direct, real-time event handling capability that EventBridge offers for EC2 security group modifications.

  • B. Create an Amazon EventBridge scheduled rule to invoke the Lambda function. Define a schedule pattern that runs the Lambda function every hour.: This method would introduce delays up to an hour before detecting and responding to unrestricted security group rules, which does not meet the requirement for near real-time detection and response.

  • D. Create an Amazon EventBridge custom event bus that subscribes to events from all AWS services. Configure the Lambda function to be invoked by the custom event bus.: Creating a custom event bus and subscribing to all AWS service events is unnecessarily broad and complex for this specific requirement. The default event bus with a targeted event pattern is sufficient and more efficient for detecting and responding to EC2 security group modifications.

A7

A DevOps engineer is creating an AWS CloudFormation template to deploy a web service.
The web service will run on Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB).
The DevOps engineer must ensure that the service can accept requests from clients that have IPv6 addresses.
What should the DevOps engineer do with the CloudFormation template so that IPv6 clients can access the web service?

  • A. Add an IPv6 CIDR block to the VPC and the private subnet for the EC2 instances. Create route table entries for the IPv6 network, use EC2 instance types that support IPv6, and assign IPv6 addresses to each EC2 instance.
  • B. Assign each EC2 instance an IPv6 Elastic IP address. Create a target group and add the EC2 instances as targets. Create a listener on port 443 of the ALB, and associate the target group with the ALB.
  • C. Replace the ALB with a Network Load Balancer (NLB). Add an IPv6 CIDR block to the VPC and subnets for the NLB, and assign the NLB an IPv6 Elastic IP address.
  • D. Add an IPv6 CIDR block to the VPC and subnets for the ALB. Create a listener on port 443, and specify the dualstack IP address type on the ALB. Create a target group and add the EC2 instances as targets. Associate the target group with the ALB.

Q8

To ensure that IPv6 clients can access the web service hosted on Amazon EC2 instances in a private subnet behind an Application Load Balancer (ALB), the DevOps engineer should implement the following configuration in the AWS CloudFormation template:

  • D. Add an IPv6 CIDR block to the VPC and subnets for the ALB. Create a listener on port 443, and specify the dualstack IP address type on the ALB. Create a target group and add the EC2 instances as targets. Associate the target group with the ALB.

  • ✅ Here's why:

  1. IPv6 Support in VPC and ALB: Adding an IPv6 CIDR block to the VPC and subnets is necessary to support IPv6 traffic. The ALB needs to be configured with dualstack (IPv4 and IPv6) support to handle requests from both types of clients (see the sketch after this list).

  2. Secure Listener Configuration: Creating a listener on port 443 ensures that the web service can securely accept HTTPS requests from clients, including those using IPv6 addresses.

  3. Target Group and EC2 Instance Association: By creating a target group and adding EC2 instances as targets, the ALB can route incoming requests to the appropriate backend instances running the web service. This setup does not require the EC2 instances themselves to have IPv6 addresses, as the ALB handles the IPv6 connectivity.
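
A minimal sketch of point 1, using boto3 (the question calls for CloudFormation, but the same resources map one-to-one to template properties such as IpAddressType: dualstack). All IDs are placeholders, and each public subnet must also have an IPv6 CIDR associated.

```python
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

# Opt the VPC into an Amazon-provided IPv6 block.
ec2.associate_vpc_cidr_block(
    VpcId="vpc-0123456789abcdef0",
    AmazonProvidedIpv6CidrBlock=True,
)

# Create the ALB as dualstack so it answers on both IPv4 and IPv6;
# targets in the private subnet can remain IPv4-only.
elbv2.create_load_balancer(
    Name="web-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    Scheme="internet-facing",
    Type="application",
    IpAddressType="dualstack",
)
```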

  • 🔴 Now, let's examine why the other options are not the best choice:

  • A. Add an IPv6 CIDR block to the VPC and the private subnet for the EC2 instances. Create route table entries for the IPv6 network, use EC2 instance types that support IPv6, and assign IPv6 addresses to each EC2 instance.: While adding IPv6 support to the VPC and subnet is necessary, directly assigning IPv6 addresses to EC2 instances in a private subnet does not address the requirement for these instances to be accessible from the internet without direct internet access.

  • B. Assign each EC2 instance an IPv6 Elastic IP address. Create a target group and add the EC2 instances as targets. Create a listener on port 443 of the ALB, and associate the target group with the ALB.: Elastic IP addresses are IPv4 addresses, and this approach does not provide a solution for enabling IPv6 client access to the ALB and, consequently, to the web service.

  • C. Replace the ALB with a Network Load Balancer (NLB). Add an IPv6 CIDR block to the VPC and subnets for the NLB, and assign the NLB an IPv6 Elastic IP address.: While NLBs support IPv6, this scenario specifies the use of an ALB, which is more appropriate for HTTP/HTTPS web services. Moreover, NLBs do not use Elastic IP addresses for IPv6; the approach described does not align with how IPv6 is implemented in AWS.

A8

A company uses AWS Organizations and AWS Control Tower to manage all the company's AWS accounts.
The company uses the Enterprise Support plan.
A DevOps engineer is using Account Factory for Terraform (AFT) to provision new accounts.
When new accounts are provisioned, the DevOps engineer notices that the support plan for the new accounts is set to the Basic Support plan.
The DevOps engineer needs to implement a solution to provision the new accounts with the Enterprise Support plan.
Which solution will meet these requirements?

  • A. Use an AWS Config conformance pack to deploy the account-part-of-organizations AWS Config rule and to automatically remediate any noncompliant accounts.
  • B. Create an AWS Lambda function to create a ticket for AWS Support to add the account to the Enterprise Support plan. Grant the Lambda function the support:ResolveCase permission.
  • C. Add an additional value to the control_tower_parameters input to set the AWSEnterpriseSupport parameter as the organization's management account number.
  • D. Set the aft_feature_enterprise_support feature flag to True in the AFT deployment input configuration. Redeploy AFT and apply the changes.

Q9

To ensure that new accounts provisioned using Account Factory for Terraform (AFT) are set up with the Enterprise Support plan, aligning with the company's use of AWS Organizations and AWS Control Tower, the following solution is recommended:

  • D. Set the aft_feature_enterprise_support feature flag to True in the AFT deployment input configuration. Redeploy AFT and apply the changes.

  • ✅ Here's why:

  1. Automated Support Plan Alignment: Enabling the aft_feature_enterprise_support feature flag within AFT's configuration directly addresses the requirement to provision new accounts with the Enterprise Support plan. This feature ensures that new accounts inherit the appropriate support level without manual intervention.

  2. Seamless Integration: Utilizing AFT to manage support plans ensures that the provisioning process is fully integrated with the company's existing infrastructure as code practices, simplifying management and ensuring consistency across all accounts.

  3. Efficiency and Scalability: This approach allows for the efficient and scalable provisioning of new AWS accounts with the desired support plan, making it ideal for organizations managing multiple accounts.

  • 🔴 Now, let's examine why the other options are not the best choice:

  • A. Use an AWS Config conformance pack to deploy the account-part-of-organizations AWS Config rule and to automatically remediate any noncompliant accounts.: While AWS Config can monitor compliance with certain configurations, it does not directly influence the support plan of AWS accounts, making this option ineffective for the stated requirement.

  • B. Create an AWS Lambda function to create a ticket for AWS Support to add the account to the Enterprise Support plan. Grant the Lambda function the support:ResolveCase permission.: This approach requires manual intervention and does not provide an automated or scalable solution for ensuring that new accounts are provisioned with the Enterprise Support plan.

  • C. Add an additional value to the control_tower_parameters input to set the AWSEnterpriseSupport parameter as the organization's management account number.: The control_tower_parameters input does not support an AWSEnterpriseSupport parameter; AFT controls the support plan through the aft_feature_enterprise_support feature flag, so this setting would have no effect.

A9

A company's DevOps engineer uses AWS Systems Manager to perform maintenance tasks during maintenance windows.
The company has a few Amazon EC2 instances that require a restart after notifications from AWS Health.
The DevOps engineer needs to implement an automated solution to remediate these notifications.
The DevOps engineer creates an Amazon EventBridge rule.
How should the DevOps engineer configure the EventBridge rule to meet these requirements?

  • A. Configure an event source of AWS Health, a service of EC2, and an event type that indicates instance maintenance. Target a Systems Manager document to restart the EC2 instance.
  • B. Configure an event source of Systems Manager and an event type that indicates a maintenance window. Target a Systems Manager document to restart the EC2 instance.
  • C. Configure an event source of AWS Health, a service of EC2, and an event type that indicates instance maintenance. Target a newly created AWS Lambda function that registers an automation task to restart the EC2 instance during a maintenance window.
  • D. Configure an event source of EC2 and an event type that indicates instance maintenance. Target a newly created AWS Lambda function that registers an automation task to restart the EC2 instance during a maintenance window.

Q10

To meet the requirements of automatically remediating notifications from AWS Health about Amazon EC2 instance maintenance by restarting instances as necessary, the DevOps engineer should configure the Amazon EventBridge rule as follows:

  • A. Configure an event source of AWS Health, a service of EC2, and an event type that indicates instance maintenance. Target a Systems Manager document to restart the EC2 instance.

  • ✅ Here's why:

  1. Direct Integration with AWS Health: AWS Health provides detailed alerts and notifications about AWS environment health, including specific events related to EC2 instance maintenance. Configuring EventBridge to listen for these specific AWS Health events ensures that the automated response is triggered by relevant health notifications.

  2. Use of Systems Manager Automation: Targeting a Systems Manager Automation document (SSM document) to restart the EC2 instance allows for the execution of predefined actions such as instance restarts. This method leverages AWS native tools for operational tasks, ensuring a seamless and efficient remediation process.

  3. Automated Response: This setup enables an automated, immediate response to maintenance notifications, minimizing downtime and ensuring that the EC2 instances are running on updated and healthy configurations without manual intervention.
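
A minimal sketch of the rule, using boto3. The role ARN is a placeholder, AWS-RestartEC2Instance is an AWS-managed Automation runbook, and the input transformer that pulls the instance ID out of the Health event is simplified; treat the automation-definition target ARN format as an assumption to verify.

```python
import json

import boto3

events = boto3.client("events")

# Match scheduled-maintenance events that AWS Health publishes for EC2.
events.put_rule(
    Name="ec2-maintenance-restart",
    EventPattern=json.dumps({
        "source": ["aws.health"],
        "detail-type": ["AWS Health Event"],
        "detail": {
            "service": ["EC2"],
            "eventTypeCategory": ["scheduledChange"],
        },
    }),
)

# Target the AWS-RestartEC2Instance Automation runbook, feeding it the
# affected instance ID from the event.
events.put_targets(
    Rule="ec2-maintenance-restart",
    Targets=[{
        "Id": "restart-runbook",
        "Arn": "arn:aws:ssm:us-east-1::automation-definition/AWS-RestartEC2Instance",
        "RoleArn": "arn:aws:iam::111122223333:role/EventBridgeAutomationRole",
        "InputTransformer": {
            "InputPathsMap": {"instance": "$.resources[0]"},
            "InputTemplate": '{"InstanceId": [<instance>]}',
        },
    }],
)
```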

  • 🔴 Now, let's examine why the other options are not the best choice:

  • B. Configure an event source of Systems Manager and an event type that indicates a maintenance window. Target a Systems Manager document to restart the EC2 instance.: This option does not directly address the requirement to respond to AWS Health events. It focuses on Systems Manager maintenance windows rather than health notifications related to instance maintenance.

  • C. Configure an event source of AWS Health, a service of EC2, and an event type that indicates instance maintenance. Target a newly created AWS Lambda function that registers an automation task to restart the EC2 instance during a maintenance window.: While this could be a valid approach, it introduces additional complexity by requiring a custom Lambda function. The direct use of a Systems Manager document simplifies the process.

  • D. Configure an event source of EC2 and an event type that indicates instance maintenance. Target a newly created AWS Lambda function that registers an automation task to restart the EC2 instance during a maintenance window.: This option inaccurately identifies EC2 as the event source for maintenance notifications, which are actually provided by AWS Health. Moreover, it unnecessarily complicates the solution by involving a custom Lambda function for a task that can be directly handled by Systems Manager.

A10

Thanks for Watching