AWS Certified DevOps Engineer - Professional - QnA - Part 1

A company has a mobile application that makes HTTP API calls to an Application Load Balancer (ALB). The ALB routes requests to an AWS Lambda function. Many different versions of the application are in use at any given time, including versions that are in testing by a subset of users. The version of the application is defined in the user-agent header that is sent with all requests to the API.
After a series of recent changes to the API, the company has observed issues with the application. The company needs to gather a metric for each API operation by response code for each version of the application that is in use. A DevOps engineer has modified the Lambda function to extract the API operation name, the version information from the user-agent header, and the response code.
Which additional set of actions should the DevOps engineer take to gather the required metrics?

  • A. Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
  • B. Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs Insights query to populate CloudWatch metrics from the log lines. Specify response code and application version as dimensions for the metric.
  • C. Configure the ALB access logs to write to an Amazon CloudWatch Logs log group. Modify the Lambda function to respond to the ALB with the API operation name, response code, and version number as response metadata. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.
  • D. Configure AWS X-Ray integration on the Lambda function. Modify the Lambda function to create an X-Ray subsegment with the API operation name, response code, and version number. Configure X-Ray insights to extract an aggregated metric for each API operation name and to publish the metric to Amazon CloudWatch. Specify response code and application version as dimensions for the metric.

Q1

The correct answer is:

  • A. Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.

  • ✅ Here's why:

  1. Direct Integration with CloudWatch Logs: Writing log lines directly to Amazon CloudWatch Logs enables immediate access to operational data.
  2. Custom Metrics with Filters: CloudWatch Logs metric filters can parse log events to extract meaningful metrics, allowing for custom metric creation based on the content of the log lines.
  3. Dimension Specification: Specifying response code and application version as dimensions allows for detailed analysis and tracking of metrics across different app versions and response statuses.
  • 🔴 Now, let's examine why the other options are not the best choice:

  • B. Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs Insights query to populate CloudWatch metrics from the log lines. Specify response code and application version as dimensions for the metric.: While CloudWatch Logs Insights provides powerful querying capabilities, it is not designed for real-time metric creation and aggregation like metric filters.

  • C. Configure the ALB access logs to write to an Amazon CloudWatch Logs log group. Modify the Lambda function to respond to the ALB with the API operation name, response code, and version number as response metadata. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric.: This option introduces unnecessary complexity by using ALB response metadata for metrics. It relies on the assumption that ALB logs can accurately capture and correlate API response metadata, which is not as straightforward as logging directly from the Lambda function.

  • D. Configure AWS X-Ray integration on the Lambda function. Modify the Lambda function to create an X-Ray subsegment with the API operation name, response code, and version number. Configure X-Ray insights to extract an aggregated metric for each API operation name and to publish the metric to Amazon CloudWatch. Specify response code and application version as dimensions for the metric.: While AWS X-Ray provides detailed tracing capabilities, it is more suited for debugging and performance monitoring rather than for creating aggregated metrics based on log data. This approach also requires more setup and maintenance compared to directly using CloudWatch Logs and metric filters.
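
For illustration, here is a minimal boto3 sketch of the metric filter described in option A. The log group name, metric namespace, and JSON field names (operation, responseCode, appVersion) are assumptions about how the Lambda function writes its log lines; publishing the operation name as a dimension is one way to get the per-operation breakdown.

```python
import boto3

logs = boto3.client("logs")

# Assumes the Lambda function writes structured JSON log lines such as
# {"operation": "GetOrder", "responseCode": 200, "appVersion": "2.4.1"}.
logs.put_metric_filter(
    logGroupName="/aws/lambda/api-handler",        # assumed log group name
    filterName="api-requests-by-version",
    filterPattern="{ $.operation = * }",           # match events that contain an operation field
    metricTransformations=[
        {
            "metricName": "ApiRequests",
            "metricNamespace": "MobileApp/API",
            "metricValue": "1",                    # increment by 1 per matching log event
            "unit": "Count",
            # Dimension values are taken from the matched JSON log event.
            "dimensions": {
                "Operation": "$.operation",
                "ResponseCode": "$.responseCode",
                "AppVersion": "$.appVersion",
            },
        }
    ],
)
```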

A1

A company is adopting AWS CodeDeploy to automate its application deployments for a Java-Apache Tomcat application with an Apache webserver.
The Development team started with a proof of concept, created a deployment group for a developer environment, and performed functional tests within the application.
After completion, the team will create additional deployment groups for staging and production. The current log level is configured within the Apache settings, but the team wants to change this configuration dynamically when the deployment occurs, so that they can set different log level configurations depending on the deployment group without having a different application revision for each group.
How can these requirements be met with the LEAST management overhead and without requiring different script versions for each deployment group?

  • A. Tag the Amazon EC2 instances depending on the deployment group. Then place a script into the application revision that calls the metadata service and the EC2 API to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference the script as part of the AfterInstall lifecycle hook in the appspec.yml file.
  • B. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file.
  • C. Create a CodeDeploy custom environment variable for each environment. Then place a script into the application revision that checks this environment variable to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the ValidateService lifecycle hook in the appspec.yml file.
  • D. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_ID to identify which deployment group the instance is part of to configure the log level settings. Reference this script as part of the Install lifecycle hook in the appspec.yml file.

Q2

The correct answer is:

  • B. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file.

  • ✅ Here's why:

  1. Simplicity and Efficiency: Utilizing the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME directly in a script allows for dynamic log level configuration without the need for multiple script versions or external API calls.
  2. Minimal Management Overhead: This approach requires no additional setup beyond the script itself, avoiding the management of tags or custom environment variables.
  3. Lifecycle Hook Appropriateness: The BeforeInstall lifecycle hook is an optimal choice for configuring settings before the application installation begins, ensuring that the log level is correctly set for each deployment.
  • 🔴 Now, let's examine why the other options are not the best choice:

  • A. Tag the Amazon EC2 instances depending on the deployment group. Then place a script into the application revision that calls the metadata service and the EC2 API to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference the script as part of the AfterInstall lifecycle hook in the appspec.yml file.: This option introduces unnecessary complexity by requiring instance tagging and API calls, increasing management overhead.

  • C. Create a CodeDeploy custom environment variable for each environment. Then place a script into the application revision that checks this environment variable to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the ValidateService lifecycle hook in the appspec.yml file.: Custom environment variables are not a native feature of CodeDeploy, which complicates configuration and management unnecessarily.

  • D. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_ID to identify which deployment group the instance is part of to configure the log level settings. Reference this script as part of the Install lifecycle hook in the appspec.yml file.: While DEPLOYMENT_GROUP_ID could theoretically identify the group, the ID is far less readable than DEPLOYMENT_GROUP_NAME when mapping groups to log levels. More importantly, the Install lifecycle event is reserved for the CodeDeploy agent to copy the revision files onto the instance, so custom scripts cannot be attached to it. A sketch of the option B hook script follows below.
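
As a minimal sketch of option B, the script below (assumed to be packaged in the application revision and referenced from the BeforeInstall hook in appspec.yml) reads DEPLOYMENT_GROUP_NAME and rewrites the Apache LogLevel directive. The group-to-level mapping and the httpd.conf path are illustrative assumptions.

```python
#!/usr/bin/env python3
"""Illustrative BeforeInstall hook script for option B."""
import os
import re

LOG_LEVELS = {          # hypothetical deployment groups -> Apache LogLevel
    "dev": "debug",
    "staging": "info",
    "production": "warn",
}
APACHE_CONF = "/etc/httpd/conf/httpd.conf"   # assumed configuration path


def main() -> None:
    # CodeDeploy exposes DEPLOYMENT_GROUP_NAME to lifecycle hook scripts.
    group = os.environ.get("DEPLOYMENT_GROUP_NAME", "")
    level = LOG_LEVELS.get(group, "warn")    # default to warn for unknown groups

    with open(APACHE_CONF) as f:
        conf = f.read()
    # Replace the existing LogLevel directive with the level for this group.
    conf = re.sub(r"^LogLevel\s+\S+", f"LogLevel {level}", conf, flags=re.MULTILINE)
    with open(APACHE_CONF, "w") as f:
        f.write(conf)


if __name__ == "__main__":
    main()
```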

A2

A company provides an application to customers.
The application has an Amazon API Gateway REST API that invokes an AWS Lambda function.
On initialization, the Lambda function loads a large amount of data from an Amazon DynamoDB table.
The data load process results in long cold-start times of 8-10 seconds.
The DynamoDB table has DynamoDB Accelerator (DAX) configured.
Customers report that the application intermittently takes a long time to respond to requests.
The application receives thousands of requests throughout the day.
In the middle of the day, the application experiences 10 times more requests than at any other time of the day.
Near the end of the day, the application's request volume decreases to 10% of its normal total.
A DevOps engineer needs to reduce the latency of the Lambda function at all times of the day.
Which solution will meet these requirements?

  • A. Configure provisioned concurrency on the Lambda function with a concurrency value of 1. Delete the DAX cluster for the DynamoDB table.
  • B. Configure reserved concurrency on the Lambda function with a concurrency value of 0.
  • C. Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100.
  • D. Configure reserved concurrency on the Lambda function. Configure AWS Application Auto Scaling on the API Gateway API with a reserved concurrency maximum value of 100.

Q3

The correct answer is:

  • C. Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100.

  • ✅ Here's why:

  1. Reduces Cold Start Latency: Provisioned concurrency initializes a specified number of Lambda function instances in advance, reducing the cold start time significantly.
  2. Dynamic Scalability: AWS Application Auto Scaling automatically adjusts the provisioned concurrency based on the demand, ensuring that the function can handle spikes in requests efficiently.
  3. Cost Optimization: Auto Scaling adjusts the concurrency levels based on usage patterns, ensuring that the function is both performant and cost-effective throughout the day.
  • 🔴 Now, let's examine why the other options are not the best choice:

  • A. Configure provisioned concurrency on the Lambda function with a concurrency value of 1. Delete the DAX cluster for the DynamoDB table.: Deleting DAX removes a key optimization for DynamoDB access, potentially increasing latency. A single provisioned concurrency may not handle high traffic spikes effectively.

  • B. Configure reserved concurrency on the Lambda function with a concurrency value of 0.: Setting reserved concurrency to 0 would prevent the Lambda function from executing, making the application unavailable.

  • D. Configure reserved concurrency on the Lambda function. Configure AWS Application Auto Scaling on the API Gateway API with a reserved concurrency maximum value of 100.: Reserved concurrency limits the number of concurrent executions, which could throttle the application during peak times. Auto Scaling on API Gateway does not address Lambda cold start latency.
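
As a sketch of option C, the following boto3 calls register the function's provisioned concurrency as an Application Auto Scaling target (minimum 1, maximum 100) and attach a target tracking policy. The function name, alias, and 70% target utilization are assumptions.

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Provisioned concurrency applies to an alias or a published version.
resource_id = "function:api-handler:live"   # illustrative function and alias

autoscaling.register_scalable_target(
    ServiceNamespace="lambda",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    MinCapacity=1,
    MaxCapacity=100,
)

autoscaling.put_scaling_policy(
    ServiceNamespace="lambda",
    ResourceId=resource_id,
    ScalableDimension="lambda:function:ProvisionedConcurrency",
    PolicyName="pc-utilization-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 0.7,   # keep provisioned-concurrency utilization around 70%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "LambdaProvisionedConcurrencyUtilization"
        },
    },
)
```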

A3

A company requires its developers to tag all Amazon Elastic Block Store (Amazon EBS) volumes in an account to indicate a desired backup frequency.
This requirement includes EBS volumes that do not require backups.
The company uses a custom tag named Backup_Frequency that has values of none, daily, or weekly that correspond to the desired backup frequency.
An audit finds that developers are occasionally not tagging the EBS volumes.
A DevOps engineer needs to ensure that all EBS volumes always have the Backup_Frequency tag so that the company can perform backups at least weekly unless a different value is specified.
Which solution will meet these requirements?

  • A. Set up AWS Config in the account. Create a custom rule that returns a compliance failure for all Amazon EC2 resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
  • B. Set up AWS Config in the account. Use a managed rule that returns a compliance failure for EC2::Volume resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.
  • C. Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.
  • D. Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events or EBS ModifyVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.

Q4

The correct answer is:

  • B. Set up AWS Config in the account. Use a managed rule that returns a compliance failure for EC2::Volume resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.

  • ✅ Here's why:

  1. Continuous Monitoring: AWS Config continuously monitors and records AWS resource configurations, allowing it to detect EBS volumes without the required tags.
  2. Automated Remediation: The ability to configure remediation actions through AWS Systems Manager Automation runbooks enables the automatic application of the Backup_Frequency tag, ensuring compliance.
  3. Managed Rule Utilization: Using a managed rule for detecting missing tags simplifies setup and maintenance, focusing on EC2::Volume resources specifically.
  • 🔴 Now, let's examine why the other options are not the best choice:

  • A. Set up AWS Config in the account. Create a custom rule that returns a compliance failure for all Amazon EC2 resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly.: Creating a custom rule for a requirement that can be met by a managed rule adds unnecessary complexity and management overhead.

  • C. Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.: This option does not account for existing volumes that are untagged; it only addresses new volumes.

  • D. Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS CreateVolume events or EBS ModifyVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.: Similar to C, this method is reactive rather than proactive, and it might not ensure all volumes are tagged consistently, especially if volumes are not modified after creation.
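
A condensed boto3 sketch of option B is shown below. The rule name, the runbook name, and the remediation parameters are assumptions; the custom Systems Manager Automation runbook that actually applies the tag is assumed to exist already.

```python
import boto3

config = boto3.client("config")

RULE_NAME = "ebs-backup-frequency-tag"   # illustrative rule name

# Managed rule "required-tags", scoped to EBS volumes, flags any volume that
# is missing the Backup_Frequency tag.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": RULE_NAME,
        "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
        "InputParameters": '{"tag1Key": "Backup_Frequency"}',
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
    }
)

# Automatic remediation through a custom SSM Automation runbook (assumed to
# exist) that applies Backup_Frequency=weekly to the non-compliant volume.
config.put_remediation_configurations(
    RemediationConfigurations=[
        {
            "ConfigRuleName": RULE_NAME,
            "TargetType": "SSM_DOCUMENT",
            "TargetId": "ApplyDefaultBackupFrequencyTag",   # hypothetical runbook
            "Automatic": True,
            "MaximumAutomaticAttempts": 3,
            "RetryAttemptSeconds": 60,
            # Pass the non-compliant volume ID to the runbook's (hypothetical)
            # VolumeId parameter.
            "Parameters": {
                "VolumeId": {"ResourceValue": {"Value": "RESOURCE_ID"}}
            },
        }
    ]
)
```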

A4

A company is using an Amazon Aurora cluster as the data store for its application.
The Aurora cluster is configured with a single DB instance.
The application performs read and write operations on the database by using the cluster's instance endpoint.
The company has scheduled an update to be applied to the cluster during an upcoming maintenance window.
The cluster must remain available with the least possible interruption during the maintenance window.
What should a DevOps engineer do to meet these requirements?

  • A. Add a reader instance to the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster's reader endpoint for reads.
  • B. Add a reader instance to the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations.
  • C. Turn on the Multi-AZ option on the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster’s reader endpoint for reads.
  • D. Turn on the Multi-AZ option on the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations

Q5

The correct answer is:

  • A. Add a reader instance to the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster's reader endpoint for reads.

  • ✅ Here's why:

  1. High Availability: Adding a reader instance to the cluster ensures that read operations can continue without interruption even when the primary instance is being updated or is unavailable.
  2. Load Distribution: Directing read operations to the reader endpoint and write operations to the cluster endpoint optimizes the load distribution, improving performance.
  3. Maintenance Flexibility: With separate instances for reading and writing, maintenance operations on the cluster can be performed with minimal impact on the application's availability.
  • 🔴 Now, let's examine why the other options are not the best choice:

  • B. Add a reader instance to the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations.: Using a custom ANY endpoint does not provide the same level of control or optimization for read and write operations compared to using dedicated endpoints.

  • C. Turn on the Multi-AZ option on the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster’s reader endpoint for reads.: Aurora does not have a Multi-AZ option that is turned on the way it is for Amazon RDS; high availability in Aurora comes from adding Aurora Replicas (reader instances) in other Availability Zones, so this choice does not apply to Aurora's architecture.

  • D. Turn on the Multi-AZ option on the Aurora cluster. Create a custom ANY endpoint for the cluster. Update the application to use the Aurora cluster's custom ANY endpoint for read and write operations: Similar to option C, the concept of turning on Multi-AZ as described does not apply to Aurora in the same way it does for RDS. Additionally, using a custom ANY endpoint for both read and write operations does not leverage the full capabilities of Aurora's read scaling.
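
As a sketch of option A, the boto3 calls below add a reader instance to the cluster and print the cluster (writer) and reader endpoints that the application should use instead of the instance endpoint. The identifiers, instance class, and engine are assumptions.

```python
import boto3

rds = boto3.client("rds")
CLUSTER = "app-aurora-cluster"   # illustrative cluster identifier

# Add a reader (Aurora Replica) to the existing single-instance cluster.
rds.create_db_instance(
    DBInstanceIdentifier="app-aurora-reader-1",
    DBInstanceClass="db.r6g.large",      # match the writer's class where possible
    Engine="aurora-mysql",               # must match the cluster's engine
    DBClusterIdentifier=CLUSTER,
)

# The application should use these two endpoints rather than the instance endpoint.
cluster = rds.describe_db_clusters(DBClusterIdentifier=CLUSTER)["DBClusters"][0]
print("writer (cluster) endpoint:", cluster["Endpoint"])
print("reader endpoint:          ", cluster["ReaderEndpoint"])
```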

A5

A company must encrypt all AMIs that the company shares across accounts.
A DevOps engineer has access to a source account where an unencrypted custom AMI has been built.
The DevOps engineer also has access to a target account where an Amazon EC2 Auto Scaling group will launch EC2 instances from the AMI.
The DevOps engineer must share the AMI with the target account.
The company has created an AWS Key Management Service (AWS KMS) key in the source account.
Which additional steps should the DevOps engineer perform to meet the requirements? (Choose three.)

  • A. In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the KMS key in the copy action.
  • B. In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the default Amazon Elastic Block Store (Amazon EBS) encryption key in the copy action.
  • C. In the source account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role in the target account.
  • D. In the source account, modify the key policy to give the target account permissions to create a grant. In the target account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role.
  • E. In the source account, share the unencrypted AMI with the target account.
  • F. In the source account, share the encrypted AMI with the target account.

Q6

The steps required to meet the company's requirements are:

  • A. In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the KMS key in the copy action.

  • D. In the source account, modify the key policy to give the target account permissions to create a grant. In the target account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role.

  • F. In the source account, share the encrypted AMI with the target account.

  • ✅ Here's why:

  1. Encryption Requirement: Encrypting the AMI with a KMS key in the source account ensures that all copies of the AMI are encrypted, fulfilling the company's encryption policy.
  2. Permission Delegation: Modifying the KMS key policy and creating a grant in the target account enables the Auto Scaling group to use the encrypted AMI, ensuring that the target account can launch EC2 instances from the AMI.
  3. Sharing the Encrypted AMI: Sharing the encrypted AMI, rather than the unencrypted version, ensures that only encrypted AMIs are used across accounts, maintaining security and compliance.
  • 🔴 Now, let's examine why the other options are not the best choice:

  • B. In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the default Amazon Elastic Block Store (Amazon EBS) encryption key in the copy action.: The default EBS encryption key is an AWS managed key whose key policy cannot be modified, so it cannot be shared with the target account. The company-created KMS key must be used so that the target account can be granted access to decrypt the AMI's snapshots.

  • C. In the source account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role in the target account.: The Auto Scaling service-linked role lives in the target account, and the documented pattern is for the target account to create the grant for that role after the source account's key policy allows the target account to create grants (option D). A grant created only from the source account does not complete this cross-account setup.

  • E. In the source account, share the unencrypted AMI with the target account.: This directly contradicts the company's requirement to encrypt all AMIs shared across accounts.
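
A condensed sketch of steps A, F, and D with boto3 is shown below. Account IDs, key ARN, AMI ID, and region are placeholders, and housekeeping such as waiting for the copy to complete or sharing the underlying snapshots is omitted.

```python
import boto3

# --- Source account ---
ec2_src = boto3.client("ec2", region_name="us-east-1")

# Step A: copy the unencrypted AMI to an encrypted AMI with the company KMS key.
copy = ec2_src.copy_image(
    Name="app-ami-encrypted",
    SourceImageId="ami-0123456789abcdef0",          # illustrative AMI ID
    SourceRegion="us-east-1",
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:111111111111:key/source-key-id",
)

# Step F: share the encrypted AMI with the target account.
ec2_src.modify_image_attribute(
    ImageId=copy["ImageId"],
    LaunchPermission={"Add": [{"UserId": "222222222222"}]},
)

# --- Target account (step D) ---
# After the source key policy allows account 222222222222 to create grants,
# grant the Auto Scaling service-linked role use of the key.
kms_tgt = boto3.client("kms", region_name="us-east-1")
kms_tgt.create_grant(
    KeyId="arn:aws:kms:us-east-1:111111111111:key/source-key-id",
    GranteePrincipal=(
        "arn:aws:iam::222222222222:role/aws-service-role/"
        "autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
    ),
    Operations=["Decrypt", "GenerateDataKeyWithoutPlaintext", "ReEncryptFrom",
                "ReEncryptTo", "CreateGrant", "DescribeKey"],
)
```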

A6

Q7

A company uses AWS CodePipeline pipelines to automate releases of its application. A typical pipeline consists of three stages: build, test, and deployment. The company has been using a separate AWS CodeBuild project to run scripts for each stage. However, the company now wants to use AWS CodeDeploy to handle the deployment stage of the pipelines.

The company has packaged the application as an RPM package and must deploy the application to a fleet of Amazon EC2 instances. The EC2 instances are in an EC2 Auto Scaling group and are launched from a common AMI.

Which combination of steps should a DevOps engineer perform to meet these requirements? (Choose two.)

  • A. Create a new version of the common AMI with the CodeDeploy agent installed. Update the IAM role of the EC2 instances to allow access to CodeDeploy.
  • B. Create a new version of the common AMI with the CodeDeploy agent installed. Create an AppSpec file that contains application deployment scripts and grants access to CodeDeploy.
  • C. Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Add a step to the CodePipeline pipeline to use EC2 Image Builder to create a new AMI. Configure CodeDeploy to deploy the newly created AMI.
  • D. Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application.
  • E. Create an application in CodeDeploy. Configure an in-place deployment type. Specify the EC2 instances that are launched from the common AMI as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application.

The steps required to meet the company's requirements for automating releases with AWS CodeDeploy are:

  • A. Create a new version of the common AMI with the CodeDeploy agent installed. Update the IAM role of the EC2 instances to allow access to CodeDeploy.

  • D. Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application.

  • ✅ Here's why:

  1. Necessity of CodeDeploy Agent: For AWS CodeDeploy to function, the CodeDeploy agent must be installed on the EC2 instances. This agent facilitates the deployment process.
  2. IAM Role Configuration: Updating the IAM role of the EC2 instances to allow access to CodeDeploy ensures that the CodeDeploy agent has the necessary permissions to perform deployments.
  3. Integration with CodePipeline: By updating the CodePipeline pipeline to use the CodeDeploy action, the company automates the deployment stage, leveraging CodeDeploy for efficient, reliable application deployments.
  • 🔴 Now, let's examine why the other options are not the best choice:

  • B. Create a new version of the common AMI with the CodeDeploy agent installed. Create an AppSpec file that contains application deployment scripts and grants access to CodeDeploy.: While creating an AppSpec file is necessary for defining deployment actions, it does not grant access to CodeDeploy. Access is managed through IAM roles and permissions, not AppSpec files.

  • C. Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Add a step to the CodePipeline pipeline to use EC2 Image Builder to create a new AMI. Configure CodeDeploy to deploy the newly created AMI.: This option involves unnecessary steps. CodeDeploy does not deploy AMIs; it deploys application revisions to instances. EC2 Image Builder is not needed for the described deployment process.

  • E. Create an application in CodeDeploy. Configure an in-place deployment type. Specify the EC2 instances that are launched from the common AMI as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application.: Specifying individual EC2 instances instead of the Auto Scaling group lacks scalability and does not fully utilize the Auto Scaling group's capabilities for dynamic scaling.
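
As a sketch of the CodeDeploy side of option D, the boto3 calls below create an application and an in-place deployment group that targets the Auto Scaling group. Names and ARNs are illustrative; the CodePipeline deploy action would then reference this application and deployment group.

```python
import boto3

codedeploy = boto3.client("codedeploy")

codedeploy.create_application(
    applicationName="tomcat-app",
    computePlatform="Server",            # EC2/on-premises deployments
)

codedeploy.create_deployment_group(
    applicationName="tomcat-app",
    deploymentGroupName="production",
    serviceRoleArn="arn:aws:iam::111111111111:role/CodeDeployServiceRole",
    autoScalingGroups=["app-asg"],       # target the Auto Scaling group, not individual instances
    deploymentStyle={
        "deploymentType": "IN_PLACE",
        "deploymentOption": "WITHOUT_TRAFFIC_CONTROL",
    },
)
```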

A7

A company’s security team requires that all external Application Load Balancers (ALBs) and Amazon API Gateway APIs are associated with AWS WAF web ACLs.
The company has hundreds of AWS accounts, all of which are included in a single organization in AWS Organizations.
The company has configured AWS Config for the organization.
During an audit, the company finds some externally facing ALBs that are not associated with AWS WAF web ACLs.
Which combination of steps should a DevOps engineer take to prevent future violations? (Choose two.)

  • A. Delegate AWS Firewall Manager to a security account.
  • B. Delegate Amazon GuardDuty to a security account.
  • C. Create an AWS Firewall Manager policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
  • D. Create an Amazon GuardDuty policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
  • E. Configure an AWS Config managed rule to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.

Q8

To ensure that all external Application Load Balancers (ALBs) and Amazon API Gateway APIs are compliant with the company's security requirement to be associated with AWS WAF web ACLs, the steps to be taken are:

  • A. Delegate AWS Firewall Manager to a security account.

  • C. Create an AWS Firewall Manager policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.

  • ✅ Here's why:

  1. Centralized Management: Delegating AWS Firewall Manager to a security account allows for centralized management of firewall policies, including AWS WAF web ACLs, across all accounts in the organization.
  2. Automatic Enforcement: By creating an AWS Firewall Manager policy to attach AWS WAF web ACLs, any new ALBs or API Gateway APIs will automatically be associated with web ACLs, ensuring immediate compliance upon creation.
  • 🔴 Now, let's examine why the other options are not the best choice:

  • B. Delegate Amazon GuardDuty to a security account.: While GuardDuty is a comprehensive threat detection service, it does not manage or enforce the association of AWS WAF web ACLs with ALBs or API Gateway APIs.

  • D. Create an Amazon GuardDuty policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.: GuardDuty focuses on threat detection and monitoring; it does not have the capability to create policies for attaching WAF web ACLs.

  • E. Configure an AWS Config managed rule to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.: AWS Config can evaluate the configuration of resources against desired settings, but it does not directly attach WAF web ACLs to resources. Its role is more about monitoring and reporting rather than enforcing specific configurations like attaching web ACLs.
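
A condensed boto3 sketch of options A and C is shown below. The delegated account ID is a placeholder, and the ManagedServiceData content is a simplified assumption; the real policy document depends on the web ACL rules the security team wants to enforce.

```python
import json
import boto3

fms = boto3.client("fms")

# Step A - run from the organization's management account: delegate Firewall
# Manager administration to the security account (account ID is a placeholder).
fms.associate_admin_account(AdminAccount="333333333333")

# Step C - run from the delegated security account: an organization-wide WAF
# policy for ALBs and API Gateway stages.
fms.put_policy(
    Policy={
        "PolicyName": "attach-waf-to-albs-and-apis",
        "SecurityServicePolicyData": {
            "Type": "WAFV2",
            # Simplified/assumed policy document; list real rule groups here.
            "ManagedServiceData": json.dumps({
                "type": "WAFV2",
                "defaultAction": {"type": "ALLOW"},
                "preProcessRuleGroups": [],
                "postProcessRuleGroups": [],
                "overrideCustomerWebACLAssociation": False,
            }),
        },
        "ResourceType": "ResourceTypeList",
        "ResourceTypeList": [
            "AWS::ElasticLoadBalancingV2::LoadBalancer",
            "AWS::ApiGateway::Stage",
        ],
        "ExcludeResourceTags": False,
        "RemediationEnabled": True,   # associate web ACLs with existing and new resources
    }
)
```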

A8

A company uses AWS KMS customer managed keys (CMKs) with manual key rotation to meet regulatory compliance requirements.
The security team wants to be notified when any keys have not been rotated after 90 days.
Which solution will accomplish this?

  • A. Configure AWS KMS to publish to an Amazon SNS topic when keys are more than 90 days old.
  • B. Configure an Amazon CloudWatch Events event to launch an AWS Lambda function to call the AWS Trusted Advisor API and publish to an Amazon SNS topic.
  • C. Develop an AWS Config custom rule that publishes to an Amazon SNS topic when keys are more than 90 days old.
  • D. Configure AWS Security Hub to publish to an Amazon SNS topic when keys are more than 90 days old.

Q9

The solution to accomplish the notification when AWS KMS keys have not been rotated after 90 days is:

  • C. Develop an AWS Config custom rule that publishes to an Amazon SNS topic when keys are more than 90 days old.

  • ✅ Here's why:

  1. Customizability and Specificity: AWS Config allows for the creation of custom rules tailored to specific compliance requirements, such as key rotation policies. This makes it possible to detect when a CMK has not been rotated within a specific timeframe.
  2. Automated Notifications: Once a custom rule identifies CMKs that have not been rotated within 90 days, it can automatically trigger a notification through Amazon SNS, ensuring timely alerts to the security team.
  • 🔴 Now, let's examine why the other options are not the best choice:

  • A. Configure AWS KMS to publish to an Amazon SNS topic when keys are more than 90 days old.: AWS KMS does not natively support direct notifications based on key rotation age or criteria.

  • B. Configure an Amazon CloudWatch Events event to launch an AWS Lambda function to call the AWS Trusted Advisor API and publish to an Amazon SNS topic.: While CloudWatch Events and Lambda can automate many tasks, Trusted Advisor does not specifically monitor or report on KMS key rotation policies or timelines.

  • D. Configure AWS Security Hub to publish to an Amazon SNS topic when keys are more than 90 days old.: Security Hub aggregates security findings from various AWS services but does not directly monitor or notify based on the age of KMS key rotation without a specific, related finding or custom integration.
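
A sketch of the Lambda evaluator behind such a custom AWS Config rule is shown below. It assumes the company's manual rotation replaces the key, so a customer managed key's CreationDate is used as a proxy for its last rotation; if rotation is tracked differently (for example with a tag), the check would change accordingly. The SNS notification itself would be driven by the rule's compliance change.

```python
import datetime
import boto3

kms = boto3.client("kms")
config = boto3.client("config")
MAX_AGE = datetime.timedelta(days=90)


def handler(event, context):
    result_token = event["resultToken"]
    now = datetime.datetime.now(datetime.timezone.utc)
    evaluations = []

    for page in kms.get_paginator("list_keys").paginate():
        for item in page["Keys"]:
            meta = kms.describe_key(KeyId=item["KeyId"])["KeyMetadata"]
            if meta["KeyManager"] != "CUSTOMER":
                continue                      # ignore AWS managed keys
            # Assumption: CreationDate approximates the last manual rotation.
            compliant = now - meta["CreationDate"] <= MAX_AGE
            evaluations.append({
                "ComplianceResourceType": "AWS::KMS::Key",
                "ComplianceResourceId": meta["KeyId"],
                "ComplianceType": "COMPLIANT" if compliant else "NON_COMPLIANT",
                "OrderingTimestamp": now,
            })

    # Report results back to AWS Config in batches of up to 100 evaluations.
    for i in range(0, len(evaluations), 100):
        config.put_evaluations(Evaluations=evaluations[i:i + 100],
                               ResultToken=result_token)
```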

A9

A security review has identified that an AWS CodeBuild project is downloading a database population script from an Amazon S3 bucket using an unauthenticated request.
The Security team does not allow unauthenticated requests to S3 buckets for this project.
How can this issue be corrected in the MOST secure manner?

  • A. Add the bucket name to the AllowedBuckets section of the CodeBuild project settings. Update the build spec to use the AWS CLI to download the database population script.
  • B. Modify the S3 bucket settings to enable HTTPS basic authentication and specify a token. Update the build spec to use cURL to pass the token and download the database population script.
  • C. Remove unauthenticated access from the S3 bucket with a bucket policy. Modify the service role for the CodeBuild project to include Amazon S3 access. Use the AWS CLI to download the database population script.
  • D. Remove unauthenticated access from the S3 bucket with a bucket policy. Use the AWS CLI to download the database population script using an IAM access key and a secret access key.

Q10

The most secure way to correct the issue of downloading a database population script from an Amazon S3 bucket using an unauthenticated request is:

  • C. Remove unauthenticated access from the S3 bucket with a bucket policy. Modify the service role for the CodeBuild project to include Amazon S3 access. Use the AWS CLI to download the database population script.

  • ✅ Here's why:

  1. Secure Access: Removing unauthenticated access and using a service role with specific Amazon S3 permissions enforces the principle of least privilege, ensuring that only the CodeBuild project can access the required resources.
  2. Compliance with Security Policy: This approach aligns with the security team's requirement to avoid unauthenticated requests, by ensuring that all access to the S3 bucket is authenticated and authorized.
  3. Use of AWS CLI: Utilizing the AWS CLI for downloading the script takes advantage of the existing AWS security infrastructure, including the use of temporary credentials provided to the CodeBuild environment.
  • 🔴 Now, let's examine why the other options are not the best choice:

  • A. Add the bucket name to the AllowedBuckets section of the CodeBuild project settings. Update the build spec to use the AWS CLI to download the database population script.: CodeBuild has no AllowedBuckets project setting, and this option leaves the bucket's unauthenticated access in place instead of removing it and granting authenticated access through the project's service role.

  • B. Modify the S3 bucket settings to enable HTTPS basic authentication and specify a token. Update the build spec to use cURL to pass the token and download the database population script.: S3 does not support HTTPS basic authentication in this manner. Authentication and authorization for S3 are managed through AWS IAM and bucket policies.

  • D. Remove unauthenticated access from the S3 bucket with a bucket policy. Use the AWS CLI to download the database population script using an IAM access key and a secret access key.: It's not recommended to use static IAM access keys within AWS CodeBuild projects. Instead, AWS services provide temporary credentials to services like CodeBuild for accessing other AWS resources securely.
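
As a sketch of option C, the snippet below scopes the CodeBuild service role to the single object and then downloads it with credentials that boto3 (like the AWS CLI in the build spec) obtains automatically from the service role. Bucket, key, and role names are illustrative.

```python
import json
import boto3

BUCKET, KEY = "build-assets-bucket", "scripts/populate_db.sql"   # illustrative

# One-time setup: allow the CodeBuild service role to read only this object.
iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="codebuild-project-service-role",
    PolicyName="read-db-population-script",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/{KEY}",
        }],
    }),
)

# Inside the build: the service role's temporary credentials are picked up
# automatically, so no static access keys are needed.
boto3.client("s3").download_file(BUCKET, KEY, "populate_db.sql")
```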

A10

Thanks for watching.