AWS Certified DevOps Engineer - Professional - QnA - Part 1

A company has a mobile application that makes HTTP API calls to an Application Load Balancer (ALB).
The ALB routes requests to an AWS Lambda function.
Many different versions of the application are in use at any given time, including versions that are in testing by a subset of users.
The version of the application is defined in the user-agent header that is sent with all requests to the API.
After a series of recent changes to the API, the company has observed issues with the application.
The company needs to gather a metric for each API operation by response code for each version of the application that is in use.
A DevOps engineer has modified the Lambda function to extract the API operation name, the version information from the user-agent header, and the response code.
Which additional set of actions should the DevOps engineer take to gather the required metrics?

  1. πŸš€ Option A: Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
  2. πŸš€ Option B: Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs Insights query to populate CloudWatch metrics from the log lines. Specify response code and application version as dimensions for the metric.
  3. πŸš€ Option C: Configure the ALB access logs to write to an Amazon CloudWatch Logs log group. Modify the Lambda function to respond to the ALB with the API operation name, response code, and version number as response metadata. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
  4. πŸš€ Option D: Configure AWS X-Ray integration on the Lambda function. Modify the Lambda function to create an X-Ray subsegment with the API operation name, response code, and version number. Configure X-Ray insights to extract an aggregated metric for each API operation name and to publish the metric to Amazon CloudWatch. Specify response code and application version as dimensions for the metric.

Q1

The Correct Answer is:

  • βœ… "Option A. Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric."

  • βœ… Here's why:

  1. βœ… Direct Logging: Writing directly to CloudWatch Logs enables immediate capture of API operation details along with response codes and version numbers, which is essential for monitoring and troubleshooting.
  2. βœ… Metric Filters: CloudWatch Logs metric filters can parse log data to extract and increment metrics based on specific criteria such as API operation names, allowing for detailed analysis and alerting.
  3. βœ… Custom Dimensions: Specifying response code and application version as dimensions provides the granularity needed to track performance and issues across different app versions and responses.
  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option B: CloudWatch Logs Insights provides powerful ad hoc querying, but it does not continuously populate CloudWatch metrics from log lines; it is suited to interactive analysis rather than the ongoing metric collection this scenario requires.

  • ❌ Option C: ALB access logs can be delivered only to Amazon S3, not to a CloudWatch Logs log group, so this configuration is not possible as described. It also adds complexity by pushing application-level details into response metadata instead of logging them directly from the Lambda function.

  • ❌ Option D: AWS X-Ray provides distributed tracing that is useful for debugging and performance analysis, but X-Ray Insights detects anomalies in traces; it does not publish custom per-operation metrics with arbitrary dimensions to CloudWatch the way a metric filter does.
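
As a rough illustration of Option A above, the sketch below (Python with boto3) shows a Lambda handler emitting one structured JSON log line per request and a one-time metric filter that counts requests with the response code and application version as dimensions. The log group name, namespace, and field names are illustrative, not taken from the scenario.

```python
import json
import boto3

logs = boto3.client("logs")

def handler(event, context):
    # ... existing business logic ...
    operation = "GetOrders"                                      # parsed from the request path/method
    version = event["headers"].get("user-agent", "unknown")      # app version parsed from User-Agent
    response_code = 200

    # One structured log line per request; Lambda stdout is captured by CloudWatch Logs.
    print(json.dumps({"operation": operation, "appVersion": version, "responseCode": response_code}))
    return {"statusCode": response_code, "body": "ok"}

def create_metric_filter():
    # One-time setup: increment a metric for each request, with up to three dimensions
    # (operation, response code, and application version) taken from the JSON log fields.
    logs.put_metric_filter(
        logGroupName="/aws/lambda/api-handler",      # illustrative log group name
        filterName="ApiOperationMetrics",
        filterPattern='{ $.operation = "*" }',       # match log lines that carry an operation field
        metricTransformations=[{
            "metricName": "ApiRequests",
            "metricNamespace": "MobileApp/API",      # illustrative namespace
            "metricValue": "1",
            "dimensions": {
                "Operation": "$.operation",
                "ResponseCode": "$.responseCode",
                "AppVersion": "$.appVersion",
            },
        }],
    )
```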

A1

A company provides an application to customers.
The application has an Amazon API Gateway REST API that invokes an AWS Lambda function.
On initialization, the Lambda function loads a large amount of data from an Amazon DynamoDB table.
The data load process results in long cold-start times of 8-10 seconds.
The DynamoDB table has DynamoDB Accelerator (DAX) configured.
Customers report that the application intermittently takes a long time to respond to requests.
The application receives thousands of requests throughout the day.
In the middle of the day, the application experiences 10 times more requests than at any other time of the day.
Near the end of the day, the application's request volume decreases to 10% of its normal total.
A DevOps engineer needs to reduce the latency of the Lambda function at all times of the day.
Which solution will meet these requirements?

  1. πŸš€ Option A: Configure provisioned concurrency on the Lambda function with a concurrency value of 1. Delete the DAX cluster for the DynamoDB table.
  2. πŸš€ Option B: Configure reserved concurrency on the Lambda function with a concurrency value of 0.
  3. πŸš€ Option C: Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100.
  4. πŸš€ Option D: Configure reserved concurrency on the Lambda function. Configure AWS Application Auto Scaling on the API Gateway API with a reserved concurrency maximum value of 100.

Q2

The Correct Answer is:

  • βœ… "Option C. Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100."

  • βœ… Here's why:

  1. βœ… Reduced Cold Start Times: Provisioned concurrency initializes a specified number of instances ahead of time, reducing the cold start latency that affects the initial response time of the Lambda function.
  2. βœ… Scalability: AWS Application Auto Scaling adjusts the provisioned concurrency levels based on demand, ensuring that the Lambda function can handle spikes in request volume during peak times without introducing additional latency.
  3. βœ… Cost-Effective: Dynamically adjusting the provisioned concurrency values ensures that resources are efficiently utilized, balancing performance needs with cost.
  4. βœ… Continuous Availability: Ensuring that a minimum level of provisioned concurrency is always available means that the Lambda function is always ready to serve requests quickly, even during periods of low usage.
  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: A fixed provisioned concurrency value of 1 cannot absorb the tenfold mid-day spike, and deleting the DAX cluster removes the caching layer that speeds up the DynamoDB reads performed during initialization, so latency would likely get worse rather than better.

  • ❌ Option B: Setting reserved concurrency to 0 effectively disables the Lambda function, preventing it from processing any requests, which does not meet the requirement to reduce latency.

  • ❌ Option D: Reserved concurrency only caps the maximum number of concurrent executions and does not keep any execution environments initialized, so it does nothing for cold starts. In addition, Application Auto Scaling does not manage a "reserved concurrency" setting on API Gateway.
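
A minimal sketch of Option C with boto3, assuming a published alias named `live` on a function named `api-handler` (both illustrative): register the alias's provisioned concurrency as a scalable target between 1 and 100, then attach a target-tracking policy on provisioned concurrency utilization.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

FUNCTION = "api-handler"   # illustrative function name
ALIAS = "live"             # provisioned concurrency must target a version or alias
resource_id = f"function:{FUNCTION}:{ALIAS}"

# Let Application Auto Scaling manage provisioned concurrency between 1 and 100.
autoscaling.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=1,
    MaxCapacity=100,
)

# Scale provisioned concurrency to keep utilization around 70%.
autoscaling.put_scaling_policy(
    ServiceNamespace="lambda",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    PolicyName="pc-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 0.7,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
        },
    },
)
```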

A2

A company is adopting AWS CodeDeploy to automate its application deployments for a Java-Apache Tomcat application with an Apache webserver.
The Development team started with a proof of concept, created a deployment group for a developer environment, and performed functional tests within the application.
After completion, the team will create additional deployment groups for staging and production. The current log level is configured within the Apache settings, but the team wants to change this configuration dynamically when the deployment occurs so that they can set different log level configurations depending on the deployment group, without having a different application revision for each group.
How can these requirements be met with the LEAST management overhead and without requiring different script versions for each deployment group?

  1. πŸš€ Option A: Tag the Amazon EC2 instances depending on the deployment group. Then place a script into the application revision that calls the metadata service and the EC2 API to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference the script as part of the AfterInstall lifecycle hook in the appspec.yml file.
  2. πŸš€ Option B: Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file.
  3. πŸš€ Option C: Create a CodeDeploy custom environment variable for each environment. Then place a script into the application revision that checks this environment variable to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the ValidateService lifecycle hook in the appspec.yml file.
  4. πŸš€ Option D: Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_ID to identify which deployment group the instance is part of to configure the log level settings. Reference this script as part of the Install lifecycle hook in the appspec.yml file.

Q3

The Correct Answer is:

  • βœ… "Option B. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to identify which deployment group the instances is part of. Use this information to configure the log level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file"

  • βœ… Here's why:

  1. βœ… Simplicity: Utilizing CodeDeploy's built-in environment variable DEPLOYMENT_GROUP_NAME allows for a straightforward method to determine the deployment group without additional complexity.
  2. βœ… No Additional Management: This approach does not require managing separate scripts for each deployment group or using external services to identify the deployment group, minimizing overhead.
  3. βœ… Early Configuration: By referencing the script in the BeforeInstall lifecycle hook, log level settings are configured before the application is installed, ensuring that the correct log levels are used throughout the deployment process.
  4. βœ… Flexibility: This method provides flexibility to dynamically change log levels based on the deployment group without altering the application code or the deployment process.
  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: Requires tagging and external calls, increasing complexity and dependency on proper tagging and metadata availability.

  • ❌ Option C: Introduces unnecessary overhead in managing custom environment variables for each deployment group.

  • ❌ Option D: DEPLOYMENT_GROUP_ID exposes an identifier rather than the human-readable group name, which makes the mapping to log levels harder to maintain. More importantly, Install is a reserved lifecycle event during which CodeDeploy copies the revision files; scripts cannot be attached to it in the appspec.yml file.
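
As a sketch of Option B, a hook script referenced from the BeforeInstall event can read the DEPLOYMENT_GROUP_NAME environment variable that CodeDeploy exposes to lifecycle scripts and rewrite the Apache log level accordingly. The Python below assumes an illustrative config path of /etc/httpd/conf.d/logging.conf and a simple group-to-level mapping.

```python
#!/usr/bin/env python3
"""BeforeInstall hook: set the Apache LogLevel based on the CodeDeploy deployment group."""
import os

# CodeDeploy exposes DEPLOYMENT_GROUP_NAME to lifecycle event scripts.
group = os.environ.get("DEPLOYMENT_GROUP_NAME", "")

# Illustrative mapping from deployment group to Apache log level.
LOG_LEVELS = {"developer": "debug", "staging": "info", "production": "warn"}
level = LOG_LEVELS.get(group, "warn")

# Illustrative config file that holds the LogLevel directive.
with open("/etc/httpd/conf.d/logging.conf", "w") as f:
    f.write(f"LogLevel {level}\n")
```

The same revision then works for every deployment group: appspec.yml simply lists this script under the BeforeInstall hook, and the group name alone decides the resulting log level.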

A3

A company requires its developers to tag all Amazon Elastic Block Store (Amazon EBS) volumes in an account to indicate a desired backup frequency.
This requirement includes EBS volumes that do not require backups.
The company uses custom tags named Backup_Frequency that have values of none, daily, or weekly that correspond to the desired backup frequency.
An audit finds that developers are occasionally not tagging the EBS volumes.
A DevOps engineer needs to ensure that all EBS volumes always have the Backup_Frequency tag so that the company can perform backups at least weekly unless a different value is specified.
Which solution will meet these requirements?

  1. πŸš€ Option A: Set up AWS Config in the account. Create a custom rule that returns a compliance failure for all Amazon EC2 resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
  2. πŸš€ Option B: Set up AWS Config in the account. Use a managed rule that returns a compliance failure for EC2::Volume resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
  3. πŸš€ Option C: Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.
  4. πŸš€ Option D: Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events or EBS ModifyVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.

Q4

The Correct Answer is:

  • βœ… "Option B. Set up AWS Config in the account. Use a managed rule that returns a compliance failure for EC2::Volume resources that do not have a Backup Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly."

  • βœ… Here's why:

  1. βœ… Specific Focus on EBS Volumes: This solution directly targets EC2::Volume resources, which are the Amazon EBS volumes requiring the Backup_Frequency tag, ensuring that the rule is applied precisely where needed.
  2. βœ… Automatic Remediation: The use of a custom AWS Systems Manager Automation runbook for remediation simplifies the process of applying the missing Backup_Frequency tag automatically, reducing the need for manual intervention.
  3. βœ… Minimal Management Overhead: Leveraging AWS Config and its managed rules for compliance checks, along with Systems Manager for automated remediation, minimizes the management overhead.
  4. βœ… Ensures Compliance: This approach guarantees that all EBS volumes are tagged with Backup_Frequency, meeting the company's backup requirements without requiring different manual processes or scripts.
  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: This option is broader than necessary, targeting all EC2 resources rather than focusing specifically on EBS volumes.

  • ❌ Option C: While proactive, this solution only addresses new volumes at the time of creation and might miss existing untagged volumes.

  • ❌ Option D: Similar to option C, this method focuses on new or modified volumes but doesn't ensure that all existing volumes are compliant, potentially leaving gaps in coverage.
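
A sketch of Option B with boto3: the REQUIRED_TAGS managed rule is scoped to AWS::EC2::Volume and checks for the Backup_Frequency tag, and a remediation configuration points at a custom Systems Manager Automation runbook. The rule name, runbook name, account ID, and automation role below are placeholders.

```python
import boto3

config = boto3.client("config")

# Managed rule: flag EBS volumes that are missing the Backup_Frequency tag.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "ebs-backup-frequency-tag",
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": '{"tag1Key": "Backup_Frequency"}',
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)

# Automatic remediation: run a custom SSM Automation runbook that applies
# Backup_Frequency=weekly to each non-compliant volume.
config.put_remediation_configurations(
    RemediationConfigurations=[{
        "ConfigRuleName": "ebs-backup-frequency-tag",
        "TargetType": "SSM_DOCUMENT",
        "TargetId": "ApplyDefaultBackupFrequencyTag",   # placeholder runbook name
        "Automatic": True,
        "MaximumAutomaticAttempts": 3,
        "RetryAttemptSeconds": 60,
        "Parameters": {
            "AutomationAssumeRole": {
                "StaticValue": {"Values": ["arn:aws:iam::111122223333:role/ConfigRemediationRole"]}  # placeholder
            },
            "ResourceId": {"ResourceValue": {"Value": "RESOURCE_ID"}},
        },
    }]
)
```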

A4

A company is using an Amazon Aurora cluster as the data store for its application.
The Aurora cluster is configured with a single DB instance.
The application performs read and write operations on the database by using the cluster's instance endpoint.
The company has scheduled an update to be applied to the cluster during an upcoming maintenance window.
The cluster must remain available with the least possible interruption during the maintenance window.
What should a DevOps engineer do to meet these requirements?

  1. πŸš€ Option A: Add a reader instance to the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster's reader endpoint for reads.
  2. πŸš€ Option B: Add a reader instance to the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations.
  3. πŸš€ Option C: Turn on the Multi-AZ option on the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster’s reader endpoint for reads.
  4. πŸš€ Option D: Turn on the Multi-AZ option on the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations.

Q5

The Correct Answer is:

  • βœ… "Option A. Add a reader instance to the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster's reader endpoint for reads."

  • βœ… Here's why:

  1. βœ… High Availability: Adding a reader instance increases the Aurora cluster's fault tolerance and availability, ensuring that read operations can continue without interruption even during maintenance activities on the primary instance.
  2. βœ… Load Distribution: Directing read operations to the reader endpoint and write operations to the cluster endpoint effectively distributes the load, which can enhance performance and reduce the impact of maintenance activities on the application's operations.
  3. βœ… Seamless Maintenance: This setup allows the primary instance to be updated without disrupting the application's ability to perform read operations, ensuring that the application remains available to users with minimal interruption.
  4. βœ… Simplicity in Implementation: Updating the application's configuration to utilize the appropriate endpoints for read and write operations is a straightforward change that leverages existing Aurora capabilities without the need for custom solutions.
  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option B: While this approach also adds a reader instance, using a custom ANY endpoint for both read and write operations does not explicitly segregate traffic and might not optimally utilize the reader instance during maintenance.

  • ❌ Option C: Aurora does not have a separate Multi-AZ option to turn on; its storage is already replicated across multiple Availability Zones, and compute-level high availability comes from adding reader instances. Because no reader is added, this option does not keep the cluster available during the instance update.

  • ❌ Option D: Similar to option C, this misunderstands Aurora's built-in high availability features and the use of a custom ANY endpoint does not provide the targeted benefit of separating read and write operations to maintain availability during updates.
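
As a sketch of Option A, adding a reader to an existing Aurora cluster is a single create_db_instance call that references the cluster; the identifiers, engine, and instance class below are illustrative. Once the reader is available, the application sends writes to the cluster endpoint and reads to the reader endpoint.

```python
import boto3

rds = boto3.client("rds")

# Add a reader instance to the existing Aurora cluster (identifiers are illustrative).
rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-reader-1",
    DBClusterIdentifier="app-aurora-cluster",
    DBInstanceClass="db.r6g.large",
    Engine="aurora-mysql",          # must match the cluster's engine
)

# The endpoints the application should use afterwards:
cluster = rds.describe_db_clusters(DBClusterIdentifier="app-aurora-cluster")["DBClusters"][0]
print("Writer (cluster) endpoint:", cluster["Endpoint"])
print("Reader endpoint:", cluster["ReaderEndpoint"])
```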

A5

A company must encrypt all AMIs that the company shares across accounts.
A DevOps engineer has access to a source account where an unencrypted custom AMI has been built.
The DevOps engineer also has access to a target account where an Amazon EC2 Auto Scaling group will launch EC2 instances from the AMI.
The DevOps engineer must share the AMI with the target account.
The company has created an AWS Key Management Service (AWS KMS) key in the source account.
Which additional steps should the DevOps engineer perform to meet the requirements? (Choose three.)

  1. πŸš€ Option A: In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the KMS key in the copy action.
  2. πŸš€ Option B: In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the default Amazon Elastic Block Store (Amazon EBS) encryption key in the copy action.
  3. πŸš€ Option C: In the source account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role in the target account.
  4. πŸš€ Option D: In the source account, modify the key policy to give the target account permissions to create a grant. In the target account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role.
  5. πŸš€ Option E: In the source account, share the unencrypted AMI with the target account.
  6. πŸš€ Option F: In the source account, share the encrypted AMI with the target account.

Q6

The Correct Answers are:

  • βœ… Option A: In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the KMS key in the copy action.

  • βœ… Option D: In the source account, modify the key policy to give the target account permissions to create a grant. In the target account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role.

  • βœ… Option F: In the source account, share the encrypted AMI with the target account.

  • βœ… Here's why:

  1. βœ… Encryption Requirement: Option A is necessary because it directly addresses the need to encrypt the AMI using a specific KMS key, ensuring that the AMI shared across accounts meets the encryption requirements.
  2. βœ… Permission Management: Option D is essential for allowing the target account to use the encrypted AMI. Modifying the key policy to allow the target account to create a grant, and then creating a grant in the target account, ensures that the Auto Scaling group can launch EC2 instances from the encrypted AMI.
  3. βœ… Sharing the Encrypted AMI: Option F ensures that the now-encrypted AMI is shared with the target account, making it available for use by the target account's Auto Scaling group.
  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option B: Copying the AMI with the default Amazon EBS encryption key does not meet the requirement of using a specific AWS KMS key created by the company.

  • ❌ Option C: Creating the grant from the source account skips the required key policy change; the target account must first be allowed to create grants on the key, and the grant for the Auto Scaling service-linked role is then created in the target account (as in Option D).

  • ❌ Option E: Sharing the unencrypted AMI does not meet the company's requirement to encrypt all AMIs shared across accounts.
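
A sketch of the A β†’ F β†’ D sequence with boto3, using placeholder AMI IDs, account numbers, and a placeholder profile for the target account. The copy runs in the source account, the encrypted AMI is shared with the target account, and the grant for the Auto Scaling service-linked role is created from the target account after the key policy allows it.

```python
import boto3

SOURCE_AMI = "ami-0123456789abcdef0"                              # placeholder unencrypted AMI
KMS_KEY_ARN = "arn:aws:kms:us-east-1:111111111111:key/EXAMPLE"    # company key in the source account
TARGET_ACCOUNT = "222222222222"                                   # placeholder target account

# --- Source account: Option A, copy to an encrypted AMI with the company KMS key ---
ec2 = boto3.client("ec2")
copy = ec2.copy_image(
    Name="app-ami-encrypted",
    SourceImageId=SOURCE_AMI,
    SourceRegion="us-east-1",
    Encrypted=True,
    KmsKeyId=KMS_KEY_ARN,
)
encrypted_ami = copy["ImageId"]

# --- Source account: Option F, share the encrypted AMI with the target account ---
ec2.modify_image_attribute(
    ImageId=encrypted_ami,
    LaunchPermission={"Add": [{"UserId": TARGET_ACCOUNT}]},
)

# --- Target account: Option D (second half), create a grant for the Auto Scaling
# service-linked role once the key policy allows the target account to create grants ---
kms_target = boto3.Session(profile_name="target").client("kms")   # placeholder profile
kms_target.create_grant(
    KeyId=KMS_KEY_ARN,
    GranteePrincipal=f"arn:aws:iam::{TARGET_ACCOUNT}:role/aws-service-role/"
                     "autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling",
    Operations=["Decrypt", "GenerateDataKeyWithoutPlaintext", "ReEncryptFrom",
                "ReEncryptTo", "CreateGrant", "DescribeKey"],
)
```

In practice the snapshots behind the encrypted AMI must also be shared with the target account (modify_snapshot_attribute), and the key policy edit in the source account is the prerequisite for the grant shown above.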

A6

A company uses AWS CodePipeline pipelines to automate releases of its application.
A typical pipeline consists of three stages: build, test, and deployment.
The company has been using a separate AWS CodeBuild project to run scripts for each stage.
However, the company now wants to use AWS CodeDeploy to handle the deployment stage of the pipelines.
The company has packaged the application as an RPM package and must deploy the application to a fleet of Amazon EC2 instances.
The EC2 instances are in an EC2 Auto Scaling group and are launched from a common AMI.
Which combination of steps should a DevOps engineer perform to meet these requirements? (Choose two.)

  1. πŸš€ Option A: Create a new version of the common AMI with the CodeDeploy agent installed. Update the IAM role of the EC2 instances to allow access to CodeDeploy.
  2. πŸš€ Option B: Create a new version of the common AMI with the CodeDeploy agent installed. Create an AppSpec file that contains application deployment scripts and grants access to CodeDeploy.
  3. πŸš€ Option C: Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Add a step to the CodePipeline pipeline to use EC2 Image Builder to create a new AMI. Configure CodeDeploy to deploy the newly created AMI.
  4. πŸš€ Option D: Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application.
  5. πŸš€ Option E: Create an application in CodeDeploy. Configure an in-place deployment type. Specify the EC2 instances that are launched from the common AMI as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application.

Q7

The Correct Answers are:

  • βœ… Option A: Create a new version of the common AMI with the CodeDeploy agent installed. Update the IAM role of the EC2 instances to allow access to CodeDeploy.

  • βœ… Option D: Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application.

  • βœ… Here's why:

  1. βœ… Necessity of CodeDeploy Agent: For AWS CodeDeploy to function, the instances it deploys to must have the CodeDeploy agent installed. This is critical for enabling the EC2 instances in the Auto Scaling group to interact with CodeDeploy for deployment operations.

  2. βœ… IAM Role Configuration: Updating the IAM role of the EC2 instances to include permissions for accessing CodeDeploy is crucial for authenticating and authorizing the instances to pull deployment instructions from CodeDeploy.

  3. βœ… Deployment Configuration in CodeDeploy: Setting up an application in CodeDeploy and configuring it for in-place deployment ensures that existing instances can be updated with the new application version directly, leveraging the Auto Scaling group to identify the deployment targets.

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option B: An AppSpec file defines deployment lifecycle scripts; it cannot grant access to CodeDeploy. Permissions come from the instance profile's IAM role, so this option leaves the instances unable to interact with CodeDeploy even though the agent is installed on the AMI.

  • ❌ Option C: Adding a step to use EC2 Image Builder to create a new AMI is an unnecessary complexity for the scenario described. The key requirement is to deploy an application packaged as an RPM, which can be done without creating a new AMI for each deployment.

  • ❌ Option E: Targeting individual EC2 instances instead of the Auto Scaling group means instances launched later by scale-out events would not receive the deployment. Specifying the Auto Scaling group as the target (Option D) keeps the whole fleet, including new instances, up to date.
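
A sketch of Option D with boto3, using placeholder names for the application, deployment group, Auto Scaling group, and service role. The CodePipeline deploy stage would then reference this application and deployment group through a CodeDeploy action, while Option A (agent baked into the AMI, instance role updated) covers the instance side.

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Create the CodeDeploy application for EC2/on-premises deployments.
codedeploy.create_application(
    applicationName="rpm-app",
    computePlatform="Server",
)

# In-place deployment group that targets the Auto Scaling group (names and ARNs are placeholders).
codedeploy.create_deployment_group(
    applicationName="rpm-app",
    deploymentGroupName="production",
    serviceRoleArn="arn:aws:iam::111122223333:role/CodeDeployServiceRole",
    autoScalingGroups=["rpm-app-asg"],
    deploymentStyle={
        "deploymentType": "IN_PLACE",
        "deploymentOption": "WITHOUT_TRAFFIC_CONTROL",
    },
)
```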

A7

A company’s security team requires that all external Application Load Balancers (ALBs) and Amazon API Gateway APIs are associated with AWS WAF web ACLs.
The company has hundreds of AWS accounts, all of which are included in a single organization in AWS Organizations.
The company has configured AWS Config for the organization.
During an audit, the company finds some externally facing ALBs that are not associated with AWS WAF web ACLs.
Which combination of steps should a DevOps engineer take to prevent future violations? (Choose two.)

  1. πŸš€ Option A: Delegate AWS Firewall Manager to a security account.
  2. πŸš€ Option B: Delegate Amazon GuardDuty to a security account.
  3. πŸš€ Option C: Create an AWS Firewall Manager policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
  4. πŸš€ Option D: Create an Amazon GuardDuty policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
  5. πŸš€ Option E: Configure an AWS Config managed rule to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.

Q8

The Correct Answers are:

  • βœ… Option A: Delegate AWS Firewall Manager to a security account.

  • βœ… Option C: Create an AWS Firewall Manager policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.

  • βœ… Here's why:

  1. βœ… Centralized Management: Delegating AWS Firewall Manager to a security account allows centralized management of security policies, including WAF web ACLs, across all accounts within the organization. This streamlines the process of ensuring compliance with the security team's requirements.

  2. βœ… Automated Policy Application: Creating an AWS Firewall Manager policy to automatically attach AWS WAF web ACLs to newly created ALBs and API Gateway APIs ensures that all new resources are immediately compliant with the company’s security policies. This preemptively prevents the issue of resources being deployed without the necessary WAF protections.

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option B: Delegating Amazon GuardDuty to a security account, while useful for threat detection and continuous monitoring, does not directly address the requirement to associate AWS WAF web ACLs with ALBs and API Gateway APIs.

  • ❌ Option D: Amazon GuardDuty does not have the capability to attach AWS WAF web ACLs to resources. GuardDuty is focused on security analysis and threat detection, not on managing WAF associations.

  • ❌ Option E: While AWS Config can detect configurations that do not comply with the company’s policies, it does not have the built-in capability to automatically attach AWS WAF web ACLs to ALBs and API Gateway APIs. AWS Config can be used to identify non-compliant resources but would require additional steps or custom automation to remediate the issues.
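
A rough sketch of Options A and C with boto3: the delegation runs from the organization's management account, and the policy is created from the delegated security account. The ManagedServiceData JSON is abbreviated and illustrative; a real policy would reference specific WAF rule groups.

```python
import json
import boto3

# Option A: from the Organizations management account, delegate Firewall Manager
# administration to the security account (placeholder account ID).
boto3.client("fms").associate_admin_account(AdminAccount="333333333333")

# Option C: from the security (Firewall Manager admin) account, create a WAFv2 policy
# that attaches a web ACL to ALBs and API Gateway stages across the organization.
fms = boto3.client("fms")
fms.put_policy(
    Policy={
        "PolicyName": "require-waf-on-alb-and-apigw",
        "SecurityServicePolicyData": {
            "Type": "WAFV2",
            # Abbreviated, illustrative policy document; a real one lists rule groups.
            "ManagedServiceData": json.dumps({
                "type": "WAFV2",
                "preProcessRuleGroups": [],
                "postProcessRuleGroups": [],
                "defaultAction": {"type": "ALLOW"},
                "overrideCustomerWebACLAssociation": False,
            }),
        },
        "ResourceTypeList": [
            "AWS::ElasticLoadBalancingV2::LoadBalancer",
            "AWS::ApiGateway::Stage",
        ],
        "ResourceType": "ResourceTypeList",
        "ExcludeResourceTags": False,
        "RemediationEnabled": True,
    }
)
```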

A8

A company uses AWS KMS with CMKs and manual key rotation to meet regulatory compliance requirements.
The security team wants to be notified when any keys have not been rotated after 90 days.
Which solution will accomplish this?

  1. πŸš€ Option A: Configure AWS KMS to publish to an Amazon SNS topic when keys are more than 90 days old.
  2. πŸš€ Option B: Configure an Amazon CloudWatch Events event to launch an AWS Lambda function to call the AWS Trusted Advisor API and publish to an Amazon SNS topic.
  3. πŸš€ Option C: Develop an AWS Config custom rule that publishes to an Amazon SNS topic when keys are more than 90 days old.
  4. πŸš€ Option D: Configure AWS Security Hub to publish to an Amazon SNS topic when keys are more than 90 days old.

Q9

The Correct Answer is:

  • βœ… Option C: Develop an AWS Config custom rule that publishes to an Amazon SNS topic when keys are more than 90 days old.

  • βœ… Here's why:

  1. βœ… Custom Monitoring: AWS Config allows for the development of custom rules tailored to specific compliance requirements, such as monitoring the rotation of AWS KMS keys. This ensures that any keys not rotated within 90 days can be identified, meeting regulatory compliance needs.

  2. βœ… Notification Capability: By publishing notifications to an Amazon SNS topic, stakeholders can be immediately informed about non-compliance, enabling prompt action to rotate keys and maintain compliance.

  3. βœ… Automation and Scalability: AWS Config can automatically monitor and evaluate AWS resources across an entire AWS environment, providing a scalable solution to compliance monitoring.

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: AWS KMS does not natively support publishing notifications to an Amazon SNS topic based on the age of keys. This functionality would need to be implemented via custom monitoring or AWS Config.

  • ❌ Option B: CloudWatch Events and AWS Lambda can build custom monitoring, but Trusted Advisor does not report on the rotation age of KMS keys; its checks cover broader best-practice recommendations, so this chain cannot produce the required notification.

  • ❌ Option D: AWS Security Hub aggregates security findings but does not directly support custom monitoring for specific compliance rules like the age of KMS keys without integration with other services like AWS Config.
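
A minimal sketch of Option C: an AWS Config custom rule backed by Lambda. It assumes that manual rotation is performed by creating a new key (so the key's CreationDate serves as a rotation proxy) and that an SNS topic ARN is passed in through the rule parameters; both assumptions are illustrative rather than prescribed by the scenario.

```python
import json
from datetime import datetime, timezone
import boto3

config = boto3.client("config")
kms = boto3.client("kms")
sns = boto3.client("sns")

MAX_AGE_DAYS = 90

def handler(event, context):
    """Custom AWS Config rule: flag KMS keys older than 90 days (manual-rotation proxy)."""
    invoking_event = json.loads(event["invokingEvent"])
    params = json.loads(event.get("ruleParameters", "{}"))
    topic_arn = params.get("snsTopicArn")           # illustrative rule parameter
    item = invoking_event["configurationItem"]      # an AWS::KMS::Key configuration item
    key_id = item["resourceId"]

    created = kms.describe_key(KeyId=key_id)["KeyMetadata"]["CreationDate"]
    age_days = (datetime.now(timezone.utc) - created).days
    compliance = "NON_COMPLIANT" if age_days > MAX_AGE_DAYS else "COMPLIANT"

    # Notify the security team when a key is overdue for rotation.
    if compliance == "NON_COMPLIANT" and topic_arn:
        sns.publish(TopicArn=topic_arn,
                    Message=f"KMS key {key_id} has not been rotated for {age_days} days.")

    config.put_evaluations(
        Evaluations=[{
            "ComplianceResourceType": item["resourceType"],
            "ComplianceResourceId": key_id,
            "ComplianceType": compliance,
            "OrderingTimestamp": item["configurationItemCaptureTime"],
        }],
        ResultToken=event["resultToken"],
    )
```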

A9

A security review has identified that an AWS CodeBuild project is downloading a database population script from an Amazon S3 bucket using an unauthenticated request.
The Security team does not allow unauthenticated requests to S3 buckets for this project.
How can this issue be corrected in the MOST secure manner?

  1. πŸš€ Option A: Add the bucket name to the AllowedBuckets section of the CodeBuild project settings. Update the build spec to use the AWS CLI to download the database population script.
  2. πŸš€ Option B: Modify the S3 bucket settings to enable HTTPS basic authentication and specify a token. Update the build spec to use cURL to pass the token and download the database population script.
  3. πŸš€ Option C: Remove unauthenticated access from the S3 bucket with a bucket policy. Modify the service role for the CodeBuild project to include Amazon S3 access. Use the AWS CLI to download the database population script.
  4. πŸš€ Option D: Remove unauthenticated access from the S3 bucket with a bucket policy. Use the AWS CLI to download the database population script using an IAM access key and a secret access key.

Q10

The Correct Answer is:

  • βœ… Option C: Remove unauthenticated access from the S3 bucket with a bucket policy. Modify the service role for the CodeBuild project to include Amazon S3 access. Use the AWS CLI to download the database population script.

  • βœ… Here's why:

  1. βœ… Security Compliance: Removing unauthenticated access and enforcing access control through IAM roles aligns with AWS best practices for securing S3 buckets, ensuring only authorized entities can access sensitive resources.

  2. βœ… IAM Role Configuration: Modifying the service role for CodeBuild to include Amazon S3 access provides a secure, scalable way to manage permissions, leveraging AWS's built-in security mechanisms without exposing sensitive credentials.

  3. βœ… Utilizing AWS CLI: Using the AWS CLI with appropriate permissions allows for secure, efficient interactions with AWS services, ensuring scripts and resources are accessed securely within AWS's ecosystem.

  • πŸ”΄ Now, let's examine why the other options are not the best choice:

  • ❌ Option A: While specifying allowed buckets in CodeBuild project settings is a step towards securing access, it does not fully address the security concern of unauthenticated access as identified by the security team.

  • ❌ Option B: S3 does not support HTTPS basic authentication with tokens in this manner. This option does not align with AWS security practices and capabilities.

  • ❌ Option D: Using IAM access keys and secret access keys directly in build specifications or scripts is not recommended due to the risk of key compromise. It is more secure and manageable to use IAM roles and policies for access control.
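
A sketch of Option C: shut off anonymous access to the bucket and grant the CodeBuild service role read access to the script. Bucket, role, and object names are placeholders, and the public access block is used here as a simple stand-in for the deny-style bucket policy the option describes. In the buildspec, the download then becomes an authenticated `aws s3 cp` that uses the role's credentials.

```python
import json
import boto3

BUCKET = "example-build-assets"               # placeholder bucket name
ROLE_NAME = "codebuild-project-service-role"  # placeholder CodeBuild service role

# Block anonymous/public access to the bucket.
boto3.client("s3").put_public_access_block(
    Bucket=BUCKET,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Grant the CodeBuild service role read access to the population script.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": [f"arn:aws:s3:::{BUCKET}/scripts/populate-db.sql"],  # placeholder object key
    }],
}
boto3.client("iam").put_role_policy(
    RoleName=ROLE_NAME,
    PolicyName="AllowScriptDownload",
    PolicyDocument=json.dumps(policy),
)
```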

A10

Thanks for Watching

AWS Certified DevOps Engineer - Professional - QnA - Part 1

By Deepak Dubey
