AWS Certified DevOps Engineer - Professional - QnA - Part 3

A company has containerized all of its in-house quality control applications.
The company is running Jenkins on Amazon EC2, which requires patching and upgrading.
The Compliance Officer has requested a DevOps Engineer begin encrypting build artifacts since they contain company intellectual property.
What should the DevOps Engineer do to accomplish this in the MOST maintainable manner?

  1. πŸš€ Option A: Automate patching and upgrading using AWS Systems Manager on EC2 instances and encrypt Amazon EBS volumes by default.
  2. πŸš€ Option B: Deploy Jenkins to an Amazon ECS cluster and copy build artifacts to an Amazon S3 bucket with default encryption enabled.
  3. πŸš€ Option C: Leverage AWS CodePipeline with a build action and encrypt the artifacts using AWS Secrets Manager.
  4. πŸš€ Option D: Use AWS CodeBuild with artifact encryption to replace the Jenkins instance running on Amazon EC2.

Q1

To accomplish the encryption of build artifacts containing company intellectual property in the most maintainable manner, the DevOps Engineer should take the following approach:

  • βœ… Option D: Use AWS CodeBuild with artifact encryption to replace the Jenkins instance running on Amazon EC2.

  • βœ… Here's why:

  1. Managed Service: AWS CodeBuild is a fully managed build service that provides a scalable and secure environment for compiling source code, running tests, and producing software packages. It significantly reduces the overhead associated with managing infrastructure for builds.

  2. Built-in Encryption: AWS CodeBuild automatically encrypts build artifacts stored in Amazon S3, using AWS Key Management Service (KMS). This ensures that artifacts are encrypted at rest without requiring additional steps from the DevOps team (see the sketch after this list).

  3. Integration and Automation: CodeBuild seamlessly integrates with other AWS services, such as AWS CodePipeline, for a complete CI/CD pipeline. This integration facilitates automating the build and deployment process, including artifact encryption.

  4. Reduced Maintenance Overhead: Replacing Jenkins, which requires manual patching and upgrading, with CodeBuild eliminates the need for ongoing server maintenance. This shift allows the DevOps team to focus more on development and less on operational tasks.
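
To illustrate the built-in encryption mentioned in point 2, here is a minimal boto3 sketch (not taken from the question) of creating a CodeBuild project whose output artifacts are encrypted with a customer managed KMS key. The project name, repository URL, bucket, role ARN, and key ARN are placeholder assumptions.

```python
import boto3

codebuild = boto3.client("codebuild")

# Hypothetical names -- replace with real resources in your account.
codebuild.create_project(
    name="qc-app-build",
    source={
        "type": "GITHUB",
        "location": "https://github.com/example/qc-app.git",
    },
    artifacts={
        "type": "S3",
        "location": "example-build-artifacts-bucket",
        "name": "qc-app-build-artifacts",
        "packaging": "ZIP",
        "encryptionDisabled": False,  # keep artifact encryption on (the default)
    },
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/standard:7.0",
        "computeType": "BUILD_GENERAL1_SMALL",
    },
    serviceRole="arn:aws:iam::111122223333:role/example-codebuild-role",
    # Customer managed KMS key used to encrypt the build output artifacts.
    encryptionKey="arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
)
```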

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: Automate patching and upgrading using AWS Systems Manager on EC2 instances and encrypt Amazon EBS volumes by default. While this improves security and management of the EC2 instances, it does not directly address the need for encrypting build artifacts.

  • ❌ Option B: Deploy Jenkins to an Amazon ECS cluster and copy build artifacts to an Amazon S3 bucket with default encryption enabled. This option might encrypt the artifacts but still involves managing Jenkins, which can be more labor-intensive than using AWS CodeBuild.

  • ❌ Option C: Leverage AWS CodePipeline with a build action and encrypt the artifacts using AWS Secrets Manager. AWS Secrets Manager is designed for managing secrets, not for artifact encryption. This option could complicate the process unnecessarily compared to the streamlined encryption capabilities offered by CodeBuild.

A1

An IT team has built an AWS CloudFormation template so others in the company can quickly and reliably deploy and terminate an application.
The template creates an Amazon EC2 instance with a user data script to install the application and an Amazon S3 bucket that the application uses to serve static webpages while it is running.
All resources should be removed when the CloudFormation stack is deleted.
However, the team observes that CloudFormation reports an error during stack deletion, and the S3 bucket created by the stack is not deleted.
How can the team resolve the error in the MOST efficient manner to ensure that all resources are deleted without errors?

  1. πŸš€ Option A: Add DeletionPolicy attribute to the S3 bucket resource, with the value Delete forcing the bucket to be removed when the stack is deleted.
  2. πŸš€ Option B: Add a custom resource that invokes an AWS Lambda function, with the DependsOn attribute specifying the S3 bucket, and an IAM role. Write the Lambda function to delete all objects from the bucket when the RequestType is Delete.
  3. πŸš€ Option C: Identify the resource that was not deleted. From the S3 console, empty the S3 bucket and then delete it.
  4. πŸš€ Option D: Replace the EC2 and S3 bucket resources with a single AWS OpsWorks Stacks resource. Define a custom recipe for the stack to create and delete the EC2 instance and the S3 bucket.

Q2

To resolve the error in the most efficient manner and ensure that all resources, including the Amazon S3 bucket created by the AWS CloudFormation stack, are deleted without errors, the IT team should implement the following solution:

  • βœ… Option B: Add a custom resource that invokes an AWS Lambda function with the DependsOn attribute specifying the S3 bucket, and an IAM role. Write the Lambda function to delete all objects from the bucket when the RequestType is Delete.

  • βœ… Here's why:

  1. βœ… Automated Cleanup: This approach automates the process of emptying the S3 bucket before stack deletion, addressing the common issue where CloudFormation cannot delete a non-empty S3 bucket.

  2. βœ… Dependable Execution: The DependsOn attribute ties the custom resource to the S3 bucket, so during stack deletion CloudFormation deletes the custom resource first, invoking the Lambda function with a RequestType of Delete and emptying the bucket before CloudFormation attempts to delete the bucket itself (see the sketch after this list).

  3. βœ… Permission Management: Assigning an IAM role to the Lambda function ensures it has the necessary permissions to delete objects from the S3 bucket, adhering to AWS's best practices for security and access control.
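
As a rough sketch of how such a custom resource could work, the Lambda handler below empties the bucket when the stack is deleted. It assumes the cfnresponse helper module that CloudFormation makes available to Lambda functions defined inline in a template, and a hypothetical BucketName property passed in from the template.

```python
import boto3
import cfnresponse  # available to Lambda functions defined inline (ZipFile) in a template

s3 = boto3.resource("s3")

def handler(event, context):
    status = cfnresponse.SUCCESS
    try:
        # Only empty the bucket when the stack (and this custom resource) is being deleted.
        if event["RequestType"] == "Delete":
            bucket_name = event["ResourceProperties"]["BucketName"]  # hypothetical property
            bucket = s3.Bucket(bucket_name)
            # Delete all object versions, delete markers, and objects so the bucket can be removed.
            bucket.object_versions.delete()
            bucket.objects.all().delete()
    except Exception:
        status = cfnresponse.FAILED
    finally:
        # Signal CloudFormation so the stack operation can continue.
        cfnresponse.send(event, context, status, {})
```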

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: Add DeletionPolicy attribute to the S3 bucket resource, with the value Delete forcing the bucket to be removed when the stack is deleted. A DeletionPolicy of Delete is already the default behavior for the bucket resource, and it does not address the actual cause of the failure: CloudFormation cannot delete an S3 bucket that still contains objects.

  • ❌ Option C: Identify the resource that was not deleted. From the S3 console, empty the S3 bucket and then delete it. This manual process is less efficient and does not provide an automated solution for future stack deletions, requiring manual intervention each time.

  • ❌ Option D: Replace the EC2 and S3 bucket resources with a single AWS OpsWorks Stacks resource. Define a custom recipe for the stack to create and delete the EC2 instance and the S3 bucket. This approach significantly changes the infrastructure setup and introduces complexity without directly addressing the issue of ensuring the S3 bucket is empty before deletion.

A2

A company has an AWS CodePipeline pipeline that is configured with an Amazon S3 bucket in the eu-west-1 Region.
The pipeline deploys an AWS Lambda application to the same Region.
The pipeline consists of an AWS CodeBuild project build action and an AWS CloudFormation deploy action.
The CodeBuild project uses the aws cloudformation package AWS CLI command to build an artifact that contains the Lambda function code’s .zip file and the CloudFormation template.
The CloudFormation deploy action references the CloudFormation template from the output artifact of the CodeBuild project’s build action.
The company wants to also deploy the Lambda application to the us-east-1 Region by using the pipeline in eu-west-1.
A DevOps engineer has already updated the CodeBuild project to use the aws cloudformation package command to produce an additional output artifact for us-east-1.
Which combination of additional steps should the DevOps engineer take to meet these requirements? (Choose two.)

  1. πŸš€ Option A: Modify the CloudFormation template to include a parameter for the Lambda function code’s zip file location. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to pass in the us-east-1 artifact location as a parameter override.
  2. πŸš€ Option B: Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact.
  3. πŸš€ Option C: Create an S3 bucket in us-east-1. Configure the S3 bucket policy to allow CodePipeline to have read and write access.
  4. πŸš€ Option D: Create an S3 bucket in us-east-1. Configure S3 Cross-Region Replication (CRR) from the S3 bucket in eu-west-1 to the S3 bucket in us-east-1.
  5. πŸš€ Option E: Modify the pipeline to include the S3 bucket for us-east-1 as an artifact store. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact.

Q3

To meet the requirements of deploying the Lambda application to the us-east-1 Region using the pipeline in eu-west-1, while ensuring that all resources are efficiently managed and deployed without errors, the DevOps engineer should take the following steps:

  • βœ… Option C: Create an S3 bucket in us-east-1. Configure the S3 bucket policy to allow CodePipeline to have read and write access.

  • βœ… Option E: Modify the pipeline to include the S3 bucket for us-east-1 as an artifact store. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact.

  • βœ… Here's why:

  1. βœ… Accessibility and Permissions: Creating an S3 bucket in us-east-1 and configuring its policy for CodePipeline access ensures that the pipeline has the necessary permissions to read and write artifacts across Regions, facilitating seamless deployment actions.

  2. βœ… Artifact Management: Modifying the pipeline to include the S3 bucket for us-east-1 as an artifact store, and configuring the CloudFormation deploy action to use the template from this artifact ensures that the correct version of the application is deployed, leveraging regional artifact storage for efficiency.
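
For illustration, here is a hedged sketch of the relevant fragments of the pipeline definition (as it might be passed to codepipeline.update_pipeline), showing the per-Region artifact stores and a cross-Region CloudFormation deploy action. The pipeline name, bucket names, artifact name, stack name, and role ARN are placeholders, not values from the question.

```python
# Relevant fragments of the pipeline structure (as passed to codepipeline.update_pipeline).
pipeline_fragment = {
    "name": "lambda-app-pipeline",
    # With cross-Region actions, CodePipeline requires one artifact store per Region.
    "artifactStores": {
        "eu-west-1": {"type": "S3", "location": "example-artifacts-eu-west-1"},
        "us-east-1": {"type": "S3", "location": "example-artifacts-us-east-1"},
    },
    "stages": [
        # ... source and build stages ...
        {
            "name": "Deploy",
            "actions": [
                # Existing eu-west-1 deploy action omitted for brevity.
                {
                    "name": "DeployUsEast1",
                    "region": "us-east-1",  # cross-Region action
                    "actionTypeId": {
                        "category": "Deploy",
                        "owner": "AWS",
                        "provider": "CloudFormation",
                        "version": "1",
                    },
                    "inputArtifacts": [{"name": "BuildOutputUsEast1"}],
                    "configuration": {
                        "ActionMode": "CREATE_UPDATE",
                        "StackName": "lambda-app-us-east-1",
                        # Template packaged for us-east-1 by the CodeBuild action.
                        "TemplatePath": "BuildOutputUsEast1::packaged-us-east-1.yaml",
                        "RoleArn": "arn:aws:iam::111122223333:role/example-cfn-role",
                        "Capabilities": "CAPABILITY_IAM",
                    },
                    "runOrder": 1,
                },
            ],
        },
    ],
}
```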

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: Modify the CloudFormation template to include a parameter for the Lambda function code’s zip file location. Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to pass in the us-east-1 artifact location as a parameter override. While this option allows for parameterization of artifact locations, it does not address the need for regional artifact storage and cross-region permissions.

  • ❌ Option B: Create a new CloudFormation deploy action for us-east-1 in the pipeline. Configure the new deploy action to use the CloudFormation template from the us-east-1 output artifact. Without configuring an S3 bucket in us-east-1 for artifact storage and permissions, this option alone is incomplete for the deployment process.

  • ❌ Option D: Create an S3 bucket in us-east-1. Configure S3 Cross-Region Replication (CRR) from the S3 bucket in eu-west-1 to the S3 bucket in us-east-1. While CRR provides a method for replicating artifacts between regions, it introduces additional complexity and costs, and it is not necessary for the pipeline to function across regions when direct artifact storage and access are configured.

A3

A company runs an application on one Amazon EC2 instance.
Application metadata is stored in Amazon S3 and must be retrieved if the instance is restarted.
The instance must restart or relaunch automatically if the instance becomes unresponsive.
Which solution will meet these requirements?

  1. πŸš€ Option A: Create an Amazon CloudWatch alarm for the StatusCheckFailed metric. Use the recover action to stop and start the instance. Use an S3 event notification to push the metadata to the instance when the instance is back up and running.
  2. πŸš€ Option B: Configure AWS OpsWorks, and use the auto healing feature to stop and start the instance. Use a lifecycle event in OpsWorks to pull the metadata from Amazon S3 and update it on the instance.
  3. πŸš€ Option C: Use EC2 Auto Recovery to automatically stop and start the instance in case of a failure. Use an S3 event notification to push the metadata to the instance when the instance is back up and running.
  4. πŸš€ Option D: Use AWS CloudFormation to create an EC2 instance that includes the UserData property for the EC2 resource. Add a command in UserData to retrieve the application metadata from Amazon S3.

Q4

To meet the requirements of restarting or relaunching an Amazon EC2 instance automatically if it becomes unresponsive and ensuring application metadata stored in Amazon S3 is retrieved upon restart, the most suitable solution is:

  • βœ… Option B: Configure AWS OpsWorks, and use the auto healing feature to stop and start the instance. Use a lifecycle event in OpsWorks to pull the metadata from Amazon S3 and update it on the instance.

  • βœ… Here's why:

  1. βœ… Automated Recovery: AWS OpsWorks' auto healing feature automatically stops and starts the instance if it becomes unresponsive, ensuring high availability without manual intervention.

  2. βœ… Metadata Management: Lifecycle events in OpsWorks can be configured to automatically pull application metadata from Amazon S3 and update it on the instance upon restart, meeting the requirement for metadata retrieval.
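
OpsWorks Stacks lifecycle events run Chef recipes; as a minimal illustration, such a recipe could invoke a small script like the following to pull the metadata after a restart. The bucket name, object key, and local path are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3")

def fetch_application_metadata():
    """Download the application metadata so the application can read it after a (re)start."""
    s3.download_file(
        Bucket="example-app-metadata-bucket",   # placeholder bucket name
        Key="metadata/app-metadata.json",       # placeholder object key
        Filename="/opt/app/app-metadata.json",  # placeholder local path
    )

if __name__ == "__main__":
    fetch_application_metadata()
```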

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: Create an Amazon CloudWatch alarm for the StatusCheckFailed metric. Use the recover action to stop and start the instance. Use an S3 event notification to push the metadata to the instance when the instance is back up and running. This option does not inherently manage the retrieval and update of metadata from S3 upon instance recovery.

  • ❌ Option C: Use EC2 Auto Recovery to automatically stop and start the instance in case of a failure. Use an S3 event notification to push the metadata to the instance when the instance is back up and running. While EC2 Auto Recovery addresses instance recovery, it doesn't directly facilitate the process of updating metadata from S3 on the instance.

  • ❌ Option D: Use AWS CloudFormation to create an EC2 instance that includes the UserData property for the EC2 resource. Add a command in UserData to retrieve the application metadata from Amazon S3. Although this option allows for the initial retrieval of metadata during instance creation, it doesn't provide a built-in mechanism for automatically restarting or relaunching the instance in case it becomes unresponsive.

A4

A company hosts a security auditing application in an AWS account.
The auditing application uses an IAM role to access other AWS accounts.
All the accounts are in the same organization in AWS Organizations.
A recent security audit revealed that users in the audited AWS accounts could modify or delete the auditing application's IAM role.
The company needs to prevent any modification to the auditing application's IAM role by any entity other than a trusted administrator IAM role.
Which solution will meet these requirements?

  1. πŸš€ Option A: Create an SCP that includes a Deny statement for changes to the auditing application's IAM role. Include a condition that allows the trusted administrator IAM role to make changes. Attach the SCP to the root of the organization.
  2. πŸš€ Option B: Create an SCP that includes an Allow statement for changes to the auditing application's IAM role by the trusted administrator IAM role. Include a Deny statement for changes by all other IAM principals. Attach the SCP to the IAM service in each AWS account where the auditing application has an IAM role.
  3. πŸš€ Option C: Create an IAM permissions boundary that includes a Deny statement for changes to the auditing application's IAM role. Include a condition that allows the trusted administrator IAM role to make changes. Attach the permissions boundary to the audited AWS accounts.
  4. πŸš€ Option D: Create an IAM permissions boundary that includes a Deny statement for changes to the auditing application’s IAM role. Include a condition that allows the trusted administrator IAM role to make changes. Attach the permissions boundary to the auditing application's IAM role in the AWS accounts.

Q5

  • βœ… Option A: Create an SCP that includes a Deny statement for changes to the auditing application's IAM role. Include a condition that allows the trusted administrator IAM role to make changes. Attach the SCP to the root of the organization.

  • βœ… Here's why:

    1. βœ… Centralized Control: Attaching the SCP to the root of the organization ensures that the policy applies across all accounts in the organization, providing a centralized method of control.

    2. βœ… Selective Permission: The SCP allows for a condition that specifically permits a trusted administrator IAM role to make changes, effectively implementing the principle of least privilege.

    3. βœ… Prevents Unauthorized Changes: By denying changes to the IAM role by any entity other than the specified administrator role, the solution directly addresses the requirement to protect the auditing application’s IAM role from modification.
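
As a rough sketch of what such an SCP could look like, the snippet below denies IAM write actions on the auditing role unless the caller is the trusted administrator role, then creates and attaches the policy at the organization root with boto3. The role names, policy name, and root ID are placeholders, and the action list is illustrative rather than exhaustive.

```python
import json
import boto3

org = boto3.client("organizations")

# Deny IAM write actions on the auditing role unless the caller is the trusted
# administrator role. Role ARNs and the root ID below are placeholders.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ProtectAuditingRole",
            "Effect": "Deny",
            "Action": [
                "iam:AttachRolePolicy",
                "iam:DeleteRole",
                "iam:DeleteRolePolicy",
                "iam:DetachRolePolicy",
                "iam:PutRolePolicy",
                "iam:UpdateAssumeRolePolicy",
                "iam:UpdateRole",
                "iam:UpdateRoleDescription",
            ],
            "Resource": "arn:aws:iam::*:role/security-auditing-app-role",
            "Condition": {
                "ArnNotLike": {
                    "aws:PrincipalArn": "arn:aws:iam::*:role/trusted-admin-role"
                }
            },
        }
    ],
}

policy = org.create_policy(
    Name="protect-auditing-role",
    Description="Only the trusted administrator role may modify the auditing role",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)

# Attach the SCP to the organization root so it applies to every account.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # placeholder organization root ID
)
```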

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option B: Creating an SCP with an Allow statement and a Deny statement for all other IAM principals does not leverage the hierarchical nature of AWS Organizations for centralized management. It also introduces complexity in maintaining allow/deny lists across multiple accounts.

  • ❌ Option C: Using an IAM permissions boundary to deny changes limits control to the specific IAM role rather than leveraging organizational-level policies to prevent unauthorized modifications across all accounts.

  • ❌ Option D: While attaching a permissions boundary to the IAM role includes conditions for a trusted administrator, it doesn't utilize the organizational controls available through SCPs for broad enforcement and simplicity.

A5

A company has an on-premises application that is written in Go.
A DevOps engineer must move the application to AWS.
The company's development team wants to enable blue/green deployments and perform A/B testing.
Which solution will meet these requirements?

  1. πŸš€ Option A: Deploy the application on an Amazon EC2 instance and create an AMI of this instance. Use this AMI to create an automatic scaling launch configuration that is used in an Auto Scaling group. Use an Elastic Load Balancer to distribute traffic. When changes are made to the application, a new AMI will be created, which will initiate an EC2 instance refresh.
  2. πŸš€ Option B: Use Amazon Lightsail to deploy the application. Store the application in a zipped format in an Amazon S3 bucket. Use this zipped version to deploy new versions of the application to Lightsail. Use Lightsail deployment options to manage the deployment.
  3. πŸš€ Option C: Use AWS CodeArtifact to store the application code. Use AWS CodeDeploy to deploy the application to a fleet of Amazon EC2 instances. Use Elastic Load Balancing to distribute the traffic to the EC2 instances. When making changes to the application, upload a new version to CodeArtifact and create a new CodeDeploy deployment.
  4. πŸš€ Option D: Use AWS Elastic Beanstalk to host the application. Store a zipped version of the application in Amazon S3, and use that location to deploy new versions of the application using Elastic Beanstalk to manage the deployment options.

Q6

  • βœ… Option D: Use AWS Elastic Beanstalk to host the application. Store a zipped version of the application in Amazon S3, and use that location to deploy new versions of the application using Elastic Beanstalk to manage the deployment options.

  • βœ… Here's why:

    1. βœ… Simplified Deployment and Management: AWS Elastic Beanstalk simplifies the process of deploying and managing applications in the AWS cloud without worrying about the infrastructure that runs those applications. It automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring.

    2. βœ… Blue/Green Deployments: Elastic Beanstalk supports blue/green deployments, allowing the company to easily switch between versions of the application without downtime, which is crucial for performing A/B testing (a minimal cut-over sketch follows this list).

    3. βœ… Integration with Amazon S3: By storing the application in a zipped format in Amazon S3, the company can easily manage application versions and roll back if necessary. Elastic Beanstalk can pull these versions directly from S3 for deployment.

    4. βœ… Managed Service: Elastic Beanstalk is a managed service, meaning AWS takes care of the underlying EC2 instances, security updates, load balancing, and auto-scaling, reducing the maintenance burden on the DevOps engineer.
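
Here is a minimal boto3 sketch of a blue/green cut-over with Elastic Beanstalk (the application, environment, bucket, and key names are placeholder assumptions): register a new application version from the zipped bundle in S3, deploy it to the idle environment, and swap CNAMEs once it is healthy.

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# 1. Register a new application version from the zipped bundle stored in S3.
eb.create_application_version(
    ApplicationName="go-app",               # placeholder application name
    VersionLabel="v2",
    SourceBundle={
        "S3Bucket": "example-app-bundles",  # placeholder bucket
        "S3Key": "go-app/v2.zip",           # placeholder key
    },
)

# 2. Deploy the new version to the idle ("green") environment.
eb.update_environment(
    EnvironmentName="go-app-green",
    VersionLabel="v2",
)

# 3. Once the green environment is healthy (and A/B testing is complete),
#    swap CNAMEs so production traffic moves to the new version.
eb.swap_environment_cnames(
    SourceEnvironmentName="go-app-blue",
    DestinationEnvironmentName="go-app-green",
)
```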

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: Deploy the application on an Amazon EC2 instance and create an AMI of this instance: This option requires manual management of AMIs and EC2 instances, increasing complexity and maintenance effort. It doesn't inherently support blue/green deployments or A/B testing without additional configuration.

  • ❌ Option B: Use Amazon Lightsail to deploy the application: While Lightsail is a great option for simple deployments, it does not offer the same level of deployment flexibility or the built-in features for blue/green deployments and A/B testing compared to Elastic Beanstalk.

  • ❌ Option C: Use AWS CodeArtifact to store the application code and AWS CodeDeploy to deploy the application: This option requires more setup and management of the deployment process compared to Elastic Beanstalk. While it supports complex deployment strategies, it might be overkill for this scenario and does not offer the same level of simplicity and integration.

A6

A Developer is maintaining a fleet of 50 Amazon EC2 Linux servers.
The servers are part of an Amazon EC2 Auto Scaling group, and also use Elastic Load Balancing for load balancing.
Occasionally, some application servers are being terminated after failing ELB HTTP health checks.
The Developer would like to perform a root cause analysis on the issue, but the server is terminated before the application logs can be accessed.
How can log collection be automated?

  1. πŸš€ Option A: Use Auto Scaling lifecycle hooks to put instances in a Pending:Wait state. Create an Amazon CloudWatch Alarm for EC2 Instance Terminate and trigger an AWS Lambda function that executes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the Successful lifecycle action once logs are collected.
  2. πŸš€ Option B: Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an AWS Config rule for the EC2 Instance-terminate Lifecycle Action and trigger a Step Functions state machine that executes a script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.
  3. πŸš€ Option C: Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon CloudWatch Logs subscription filter for the EC2 instance and trigger the CloudWatch agent to execute a script that collects logs, pushes them to Amazon S3, and completes the lifecycle action as Terminate Successful once logs are collected.
  4. πŸš€ Option D: Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon CloudWatch Events rule for the EC2 Instance-terminate Lifecycle Action and trigger an AWS Lambda function that executes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.

Q7

The correct answer to automate log collection for EC2 instances before they are terminated is:

  • βœ… Option D: Use Auto Scaling lifecycle hooks to put instances in a Terminating:Wait state. Create an Amazon CloudWatch Events rule for the EC2 Instance-terminate Lifecycle Action and trigger an AWS Lambda function that executes an SSM Run Command script to collect logs, push them to Amazon S3, and complete the lifecycle action once logs are collected.

  • βœ… Here's why:

  1. βœ… Lifecycle Hooks: Auto Scaling lifecycle hooks allow you to pause the instance before it is terminated, providing an opportunity to perform custom actions such as log collection.

  2. βœ… Amazon CloudWatch Events: By creating a CloudWatch Events rule, you can automatically trigger a response to specific AWS events, such as EC2 instance termination.

  3. βœ… AWS Lambda and SSM Run Command: A Lambda function can execute SSM Run Command scripts to collect logs from the terminating instance and push them to S3, ensuring that logs are preserved for analysis (see the sketch after this list).

  4. βœ… Automated and Scalable: This solution automates the log collection process and is scalable, suitable for a fleet of any size, and ensures that no manual intervention is required.
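
A minimal sketch of the Lambda function that the CloudWatch Events rule could invoke is shown below. The log directory and S3 bucket are placeholder assumptions, and a production version would wait for the Run Command invocation to finish before completing the lifecycle action.

```python
import boto3

ssm = boto3.client("ssm")
autoscaling = boto3.client("autoscaling")

# Placeholder values -- adjust for your environment.
LOG_BUCKET = "example-app-log-archive"
LOG_DIR = "/var/log/app"

def handler(event, context):
    """Triggered by the EC2 Instance-terminate Lifecycle Action event."""
    detail = event["detail"]
    instance_id = detail["EC2InstanceId"]

    # 1. Ask SSM Run Command to copy the application logs to S3 while the
    #    instance is held in the Terminating:Wait state.
    ssm.send_command(
        InstanceIds=[instance_id],
        DocumentName="AWS-RunShellScript",
        Parameters={
            "commands": [
                f"aws s3 cp {LOG_DIR} s3://{LOG_BUCKET}/{instance_id}/ --recursive"
            ]
        },
    )

    # 2. Release the lifecycle hook so Auto Scaling can finish terminating the
    #    instance. (A production version would wait for the command to finish first.)
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=detail["LifecycleHookName"],
        AutoScalingGroupName=detail["AutoScalingGroupName"],
        LifecycleActionToken=detail["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",
        InstanceId=instance_id,
    )
```
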
  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: While it puts instances in a wait state and uses a Lambda function, the Pending:Wait state applies when an instance is launching, not terminating, so it cannot be used to collect logs before termination.

  • ❌ Option B: AWS Config evaluates resource configurations for compliance; it does not react to Auto Scaling lifecycle hook events, so it cannot reliably trigger the log-collection workflow the way the CloudWatch Events rule in Option D does.

  • ❌ Option C: CloudWatch Logs subscription filters stream log data to destinations such as Lambda or Kinesis; they do not trigger the CloudWatch agent to run scripts, so this option does not reliably collect logs from a terminating instance.

A7

A company has an organization in AWS Organizations.
The organization includes workload accounts that contain enterprise applications.
The company centrally manages users from an operations account.
No users can be created in the workload accounts.
The company recently added an operations team and must provide the operations team members with administrator access to each workload account.
Which combination of actions will provide this access? (Choose three.)

  1. πŸš€ Option A: Create a SysAdmin role in the operations account. Attach the AdministratorAccess policy to the role. Modify the trust relationship to allow the sts:AssumeRole action from the workload accounts.
  2. πŸš€ Option B: Create a SysAdmin role in each workload account. Attach the AdministratorAccess policy to the role. Modify the trust relationship to allow the sts:AssumeRole action from the operations account.
  3. πŸš€ Option C: Create an Amazon Cognito identity pool in the operations account. Attach the SysAdmin role as an authenticated role.
  4. πŸš€ Option D: In the operations account, create an IAM user for each operations team member.
  5. πŸš€ Option E: In the operations account, create an IAM user group that is named SysAdmins. Add an IAM policy that allows the sts:AssumeRole action for the SysAdmin role in each workload account. Add all operations team members to the group.
  6. πŸš€ Option F: Create an Amazon Cognito user pool in the operations account. Create an Amazon Cognito user for each operations team member.

Q8

To provide administrator access to the operations team for each workload account while ensuring no users are created in the workload accounts, the best approach involves creating roles and managing permissions centrally from the operations account. Here's how the selected options fulfill the requirements:

  1. βœ… Option B: Creating a SysAdmin role in each workload account with the AdministratorAccess policy attached ensures that the operations team can have full administrative capabilities in those accounts. Modifying the trust relationship to allow the sts:AssumeRole action from the operations account enables centralized management of access, where operations team members can assume the SysAdmin role as needed without having individual user accounts in each workload account.

  2. βœ… Option D: Creating an IAM user for each operations team member in the operations account is a secure way to manage team members' identities centrally. This setup allows for easier management of credentials and permissions from a single account.

  3. βœ… Option E: Creating an IAM user group named SysAdmins in the operations account and adding an IAM policy that allows sts:AssumeRole action for the SysAdmin role in each workload account streamlines the permission model. This allows all operations team members added to the SysAdmins group to assume the necessary roles across workload accounts, facilitating a centralized approach to access management.
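
As an illustration of how Options B and E fit together, here are sketches of the two policy documents involved: the trust policy on the SysAdmin role in each workload account and the policy on the SysAdmins group in the operations account. The account ID and role name are placeholder assumptions.

```python
import json

# Trust policy attached to the SysAdmin role in each workload account (Option B):
# only principals in the operations account may assume the role.
# 111111111111 is a placeholder operations account ID.
sysadmin_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Policy attached to the SysAdmins IAM user group in the operations account
# (Option E): members may assume the SysAdmin role in any workload account.
sysadmins_group_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::*:role/SysAdmin",
        }
    ],
}

print(json.dumps(sysadmin_trust_policy, indent=2))
print(json.dumps(sysadmins_group_policy, indent=2))
```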

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: Creating a SysAdmin role in the operations account to be assumed by the workload accounts reverses the intended direction of access control. The requirement is to manage access from the operations account to the workload accounts, not the other way around.

  • ❌ Option C: An Amazon Cognito identity pool in the operations account does not address the requirement for IAM role assumption across AWS accounts within AWS Organizations.

  • ❌ Option F: Creating an Amazon Cognito user pool for operations team members does not directly facilitate the assumption of roles within the workload accounts, as it is more suited for application-level authentication rather than AWS resource access management.

A8

A company has multiple accounts in an organization in AWS Organizations.
The company's SecOps team needs to receive an Amazon Simple Notification Service (Amazon SNS) notification if any account in the organization turns off the Block Public Access feature on an Amazon S3 bucket.
A DevOps engineer must implement this change without affecting the operation of any AWS accounts.
The implementation must ensure that individual member accounts in the organization cannot turn off the notification.
Which solution will meet these requirements?

  1. πŸš€ Option A: Designate an account to be the delegated Amazon GuardDuty administrator account. Turn on GuardDuty for all accounts across the organization. In the GuardDuty administrator account, create an SNS topic. Subscribe the SecOps team's email address to the SNS topic. In the same account, create an Amazon EventBridge rule that uses an event pattern for GuardDuty findings and a target of the SNS topic.
  2. πŸš€ Option B: Create an AWS CloudFormation template that creates an SNS topic and subscribes the SecOps team’s email address to the SNS topic. In the template, include an Amazon EventBridge rule that uses an event pattern of CloudTrail activity for s3:PutBucketPublicAccessBlock and a target of the SNS topic. Deploy the stack to every account in the organization by using CloudFormation StackSets.
  3. πŸš€ Option C: Turn on AWS Config across the organization. In the delegated administrator account, create an SNS topic. Subscribe the SecOps team's email address to the SNS topic. Deploy a conformance pack that uses the s3-bucket-level-public-access-prohibited AWS Config managed rule in each account and uses an AWS Systems Manager document to publish an event to the SNS topic to notify the SecOps team.
  4. πŸš€ Option D: Turn on Amazon Inspector across the organization. In the Amazon Inspector delegated administrator account, create an SNS topic. Subscribe the SecOps team’s email address to the SNS topic. In the same account, create an Amazon EventBridge rule that uses an event pattern for public network exposure of the S3 bucket and publishes an event to the SNS topic to notify the SecOps team.

Q9

The correct answer is:

  • βœ… Option C: Turn on AWS Config across the organization. In the delegated administrator account, create an SNS topic. Subscribe the SecOps team's email address to the SNS topic. Deploy a conformance pack that uses the s3-bucket-level-public-access-prohibited AWS Config managed rule in each account and uses an AWS Systems Manager document to publish an event to the SNS topic to notify the SecOps team.

  • βœ… Here's why:

  1. βœ… Centralized Control and Automation: AWS Config provides a way to centrally manage and monitor configurations across an organization in AWS Organizations. By turning on AWS Config across the organization, the company ensures that all accounts are monitored for compliance with security policies, including S3 bucket public access settings.

  2. βœ… Immediate Notification: By creating an SNS topic and subscribing the SecOps team's email address, the company ensures that the SecOps team is immediately notified of any changes to the Block Public Access setting on S3 buckets. This allows for quick response to potential security risks.

  3. βœ… Conformance Packs for Organizational Compliance: Deploying a conformance pack with the s3-bucket-level-public-access-prohibited AWS Config managed rule ensures that all S3 buckets across the organization adhere to the policy of prohibiting public access unless explicitly allowed. This rule directly addresses the requirement to monitor and notify about changes to the Block Public Access feature.
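
As a rough sketch (the topic name, email address, and conformance pack template are placeholders, and the Systems Manager notification wiring is omitted), the delegated administrator account could be set up roughly like this with boto3:

```python
import boto3

sns = boto3.client("sns")
config = boto3.client("config")

# 1. SNS topic for the SecOps team (topic name and email are placeholders).
topic = sns.create_topic(Name="secops-s3-public-access-alerts")
sns.subscribe(
    TopicArn=topic["TopicArn"],
    Protocol="email",
    Endpoint="secops@example.com",
)

# 2. Minimal conformance pack template containing the managed rule that flags
#    buckets whose bucket-level Block Public Access settings are turned off.
conformance_pack_template = """
Resources:
  S3BucketLevelPublicAccessProhibited:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-level-public-access-prohibited
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_LEVEL_PUBLIC_ACCESS_PROHIBITED
"""

# 3. Deploy the pack to every account in the organization from the
#    delegated administrator account.
config.put_organization_conformance_pack(
    OrganizationConformancePackName="s3-block-public-access-pack",
    TemplateBody=conformance_pack_template,
)
```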

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: Designating an account as the Amazon GuardDuty administrator and using GuardDuty findings does not directly address the specific requirement of monitoring and notifying about changes to S3 bucket public access settings. GuardDuty is focused on threat detection and continuous monitoring for malicious activity, not configuration compliance.

  • ❌ Option B: Creating a CloudFormation template for deploying an SNS topic and EventBridge rule for CloudTrail activity is a valid approach for automation but does not ensure organization-wide compliance or prevent member accounts from disabling the notification. This method also requires deployment in every account, which adds complexity.

  • ❌ Option D: Turning on Amazon Inspector across the organization and creating notifications for public network exposure of S3 buckets may not directly capture changes to the Block Public Access settings. Inspector is designed for network assessments and vulnerability management rather than monitoring specific service configurations like S3 public access.

A9

A company has migrated its container-based applications to Amazon EKS and wants to establish automated email notifications.
The notifications sent to each email address are for specific activities related to EKS components.
The solution will include Amazon SNS topics and an AWS Lambda function to evaluate incoming log events and publish messages to the correct SNS topic.
Which logging solution will support these requirements?

  1. πŸš€ Option A: Enable Amazon CloudWatch Logs to log the EKS components. Create a CloudWatch subscription filter for each component with Lambda as the subscription feed destination.
  2. πŸš€ Option B: Enable Amazon CloudWatch Logs to log the EKS components. Create CloudWatch Logs Insights queries linked to Amazon CloudWatch Events events that trigger Lambda.
  3. πŸš€ Option C: Enable Amazon S3 logging for the EKS components. Configure an Amazon CloudWatch subscription filter for each component with Lambda as the subscription feed destination.
  4. πŸš€ Option D: Enable Amazon S3 logging for the EKS components. Configure S3 PUT Object event notifications with AWS Lambda as the destination.

Q10

The correct answer is:

  • βœ… Option A: Enable Amazon CloudWatch Logs to log the EKS components. Create a CloudWatch subscription filter for each component with Lambda as the subscription feed destination.

  • βœ… Here's why:

  1. βœ… Integration with Amazon EKS: Amazon CloudWatch Logs directly integrates with Amazon EKS to capture logs. This allows for a streamlined process to collect and analyze logs from EKS components.

  2. βœ… Automated Processing: Using CloudWatch subscription filters, log data can be automatically processed. When configured with AWS Lambda as the subscription feed destination, it enables dynamic evaluation of log events and the automated routing of notifications via Amazon SNS topics (see the sketch after this list).

  3. βœ… Flexibility and Scalability: This approach offers flexibility in log analysis and scalability to handle large volumes of log data, making it suitable for dynamic container environments like EKS.
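
For illustration, here is a minimal sketch of the Lambda function a subscription filter could invoke: it decodes the compressed log batch, picks the SNS topic for the EKS component that produced it, and forwards events that look relevant. The log group names, topic ARNs, and the simple "error" filter are placeholder assumptions.

```python
import base64
import gzip
import json

import boto3

sns = boto3.client("sns")

# Placeholder mapping of EKS component log groups to SNS topic ARNs.
TOPIC_BY_LOG_GROUP = {
    "/aws/eks/prod-cluster/apiserver": "arn:aws:sns:us-east-1:111122223333:eks-apiserver-alerts",
    "/aws/eks/prod-cluster/scheduler": "arn:aws:sns:us-east-1:111122223333:eks-scheduler-alerts",
}

def handler(event, context):
    # CloudWatch Logs subscription filters deliver the log batch as a
    # base64-encoded, gzip-compressed JSON document.
    payload = gzip.decompress(base64.b64decode(event["awslogs"]["data"]))
    batch = json.loads(payload)

    topic_arn = TOPIC_BY_LOG_GROUP.get(batch["logGroup"])
    if not topic_arn:
        return  # no notification configured for this component

    for log_event in batch["logEvents"]:
        # Simple placeholder rule: forward events that look like errors.
        if "error" in log_event["message"].lower():
            sns.publish(
                TopicArn=topic_arn,
                Subject=f"EKS alert: {batch['logGroup']}",
                Message=log_event["message"],
            )
```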

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option B: Enable Amazon CloudWatch Logs to log the EKS components. Create CloudWatch Logs Insights queries linked to Amazon CloudWatch Events events that trigger Lambda. While CloudWatch Logs Insights provides powerful log analysis capabilities, it does not inherently automate the notification process based on log analysis, requiring manual intervention for notification setup.

  • ❌ Option C: Enable Amazon S3 logging for the EKS components. Configure an Amazon CloudWatch subscription filter for each component with Lambda as the subscription feed destination. EKS does not directly log to Amazon S3, making this option less straightforward and introducing unnecessary complexity.

  • ❌ Option D: Enable Amazon S3 logging for the EKS components. Configure S3 PUT Object event notifications with AWS Lambda as the destination. Similar to Option C, this method involves additional steps and complexities since EKS components do not natively log to S3, making it less efficient for real-time log analysis and notification.

A10

Thanks for watching.