A company has a mobile application that makes HTTP API calls to an Application Load Balancer (ALB).
The ALB routes requests to an AWS Lambda function.
Many different versions of the application are in use at any given time, including versions that are in testing by a subset of users.
The version of the application is defined in the user-agent header that is sent with all requests to the API.
After a series of recent changes to the API, the company has observed issues with the application.
The company needs to gather a metric for each API operation by response code for each version of the application that is in use.
A DevOps engineer has modified the Lambda function to extract the API operation name, the version information from the user-agent header, and the response code.
Which additional set of actions should the DevOps engineer take to gather the required metrics?
Q1
The Correct Answer is:
✅ "Option A. Modify the Lambda function to write the API operation name, response code, and version number as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs metric filter that increments a metric for each API operation name. Specify response code and application version as dimensions for the metric."
✅ Here's why:
✅ Continuous Custom Metrics: Writing a structured log line per request and pairing it with a metric filter that uses response code and application version as dimensions turns every invocation into a continuously updated CloudWatch metric, giving per-operation, per-version, per-response-code visibility without any additional infrastructure.
🔴 Now, let's examine why the other options are not the best choice:
❌ Option B: While CloudWatch Logs Insights provides powerful querying capabilities, it is better suited to ad-hoc analysis than to the continuous metric gathering this scenario requires.
❌ Option C: Although ALB access logs are valuable for understanding incoming traffic patterns, relying on them alone misses the opportunity to directly capture and analyze detailed application-level metrics from within the Lambda function.
❌ Option D: AWS X-Ray provides detailed tracing capabilities that are useful for debugging and performance analysis, but it is not primarily focused on metric aggregation for operational monitoring in the way CloudWatch metrics are.
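The logging approach described above can be sketched as follows. This is a minimal illustration, not the company's actual code: the field names (operation, responseCode, appVersion) are assumptions, and the metric filter pattern would need to reference the same names, with responseCode and appVersion mapped as metric dimensions.

```python
import json

def metric_log_record(operation: str, response_code: int, version: str) -> str:
    """Build the structured log line the Lambda would print to CloudWatch Logs.

    Field names are illustrative assumptions; a metric filter such as
    { $.operation = * } would match these lines, incrementing a metric
    per operation with responseCode and appVersion as dimensions.
    """
    return json.dumps({
        "operation": operation,
        "responseCode": response_code,
        "appVersion": version,
    })

# In the Lambda handler, printing the record sends it to the function's log group.
print(metric_log_record("GetUser", 200, "2.4.1"))
```

Because CloudWatch Logs metric filters support up to three dimensions, both the response code and the application version can be carried on a single metric per operation name.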
A1
A company provides an application to customers.
The application has an Amazon API Gateway REST API that invokes an AWS Lambda function.
On initialization, the Lambda function loads a large amount of data from an Amazon DynamoDB table.
The data load process results in long cold-start times of 8-10 seconds.
The DynamoDB table has DynamoDB Accelerator (DAX) configured.
Customers report that the application intermittently takes a long time to respond to requests.
The application receives thousands of requests throughout the day.
In the middle of the day, the application experiences 10 times more requests than at any other time of the day.
Near the end of the day, the application's request volume decreases to 10% of its normal total.
A DevOps engineer needs to reduce the latency of the Lambda function at all times of the day.
Which solution will meet these requirements?
Q2
The Correct Answer is:
✅ "Option C. Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100."
✅ Here's why:
✅ Warm Execution Environments: Provisioned concurrency keeps initialized execution environments ready, eliminating the 8-10 second cold starts, while Application Auto Scaling moves the provisioned level between 1 and 100 so that capacity tracks the tenfold midday peak and the end-of-day drop without paying for idle concurrency.
🔴 Now, let's examine why the other options are not the best choice:
❌ Option A: This option does not adequately address the fluctuating demand throughout the day and unnecessarily suggests removing DAX, which otherwise benefits DynamoDB read performance.
❌ Option B: Setting reserved concurrency to 0 effectively disables the Lambda function, preventing it from processing any requests, which does not meet the requirement to reduce latency.
❌ Option D: Reserved concurrency limits the maximum number of concurrent executions, which could restrict the ability to handle spikes in traffic. Additionally, configuring Auto Scaling on API Gateway does not address Lambda cold-start latency.
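The scaling setup in the correct answer can be sketched as the parameters one would pass to Application Auto Scaling's register-scalable-target call. The function and alias names are placeholders; the namespace and scalable dimension strings are the documented values for Lambda provisioned concurrency.

```python
def provisioned_concurrency_scaling_target(function_name: str, alias: str) -> dict:
    """Parameters for application-autoscaling register_scalable_target.

    The ResourceId must point at a function alias (or published version),
    never $LATEST. Names here are placeholders, not from the question.
    """
    return {
        "ServiceNamespace": "lambda",
        "ResourceId": f"function:{function_name}:{alias}",
        "ScalableDimension": "lambda:function:ProvisionedConcurrency",
        "MinCapacity": 1,    # keep one warm environment overnight
        "MaxCapacity": 100,  # headroom for the 10x midday peak
    }
```

A target-tracking scaling policy on the same target (tracking the provisioned-concurrency utilization metric) would then adjust capacity between these bounds automatically.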
A2
A company is adopting AWS CodeDeploy to automate its application deployments for a Java-Apache Tomcat application with an Apache webserver.
The Development team started with a proof of concept, created a deployment group for a developer environment, and performed functional tests within the application.
After completion, the team will create additional deployment groups for staging and production. The current log level is configured within the Apache settings, but the team wants to change this configuration dynamically when a deployment occurs so that they can set different log level configurations for each deployment group without maintaining a different application revision for each group.
How can these requirements be met with the LEAST management overhead and without requiring different script versions for each deployment group?
Q3
The Correct Answer is:
✅ "Option B. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to identify which deployment group the instance is part of. Use this information to configure the log level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file."
✅ Here's why:
✅ Built-in Environment Variable: CodeDeploy exposes the deployment group name to lifecycle hook scripts through DEPLOYMENT_GROUP_NAME, so a single script in a single application revision can set the appropriate log level for the developer, staging, and production groups with no extra tagging or metadata lookups.
🔴 Now, let's examine why the other options are not the best choice:
❌ Option A: Requires tagging and external calls, increasing complexity and creating a dependency on proper tagging and metadata availability.
❌ Option C: Introduces unnecessary overhead in managing custom environment variables for each deployment group.
❌ Option D: DEPLOYMENT_GROUP_ID is not as intuitive or easily managed as DEPLOYMENT_GROUP_NAME, which carries the deployment group's human-readable name.
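A hook script of the kind option B describes might look like the sketch below. The group-name fragments and log levels are assumptions; the only fact taken from CodeDeploy is that lifecycle hook scripts receive the DEPLOYMENT_GROUP_NAME environment variable.

```python
import os

# Assumed mapping from deployment-group name fragments to Apache log levels;
# the actual group names would be whatever the team created.
LOG_LEVELS = {"developer": "debug", "staging": "info", "production": "warn"}

def log_level_for(group_name: str) -> str:
    """Pick a log level based on the CodeDeploy-provided group name."""
    lowered = group_name.lower()
    for fragment, level in LOG_LEVELS.items():
        if fragment in lowered:
            return level
    return "warn"  # safe default for unrecognized groups

if __name__ == "__main__":
    # CodeDeploy exports DEPLOYMENT_GROUP_NAME to lifecycle hook scripts;
    # the script would write this level into the Apache configuration.
    group = os.environ.get("DEPLOYMENT_GROUP_NAME", "production")
    print(log_level_for(group))
```

Referencing this one script from the BeforeInstall hook in appspec.yml means the same revision works unchanged across all three deployment groups.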
A3
A company requires its developers to tag all Amazon Elastic Block Store (Amazon EBS) volumes in an account to indicate a desired backup frequency.
This requirement includes EBS volumes that do not require backups.
The company uses custom tags named Backup_Frequency that have values of none, daily, or weekly that correspond to the desired backup frequency.
An audit finds that developers are occasionally not tagging the EBS volumes.
A DevOps engineer needs to ensure that all EBS volumes always have the Backup_Frequency tag so that the company can perform backups at least weekly unless a different value is specified.
Which solution will meet these requirements?
Q4
The Correct Answer is:
✅ "Option B. Set up AWS Config in the account. Use a managed rule that returns a compliance failure for EC2::Volume resources that do not have a Backup_Frequency tag applied. Configure a remediation action that uses a custom AWS Systems Manager Automation runbook to apply the Backup_Frequency tag with a value of weekly."
✅ Here's why:
✅ Continuous Coverage with Remediation: An AWS Config managed rule evaluates existing and newly created EBS volumes alike, and the automatic remediation runbook applies the Backup_Frequency tag with a value of weekly to any untagged volume, guaranteeing at-least-weekly backups unless a developer specifies a different value.
🔴 Now, let's examine why the other options are not the best choice:
❌ Option A: This option is broader than necessary, targeting all EC2 resources rather than focusing specifically on EBS volumes.
❌ Option C: While proactive, this solution only addresses new volumes at the time of creation and might miss existing untagged volumes.
❌ Option D: Similar to option C, this method focuses on new or modified volumes but doesn't ensure that all existing volumes are compliant, potentially leaving gaps in coverage.
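The core of the remediation runbook's logic can be sketched as a pure function: given a volume's current tags, return the tags it should end up with. This is an illustrative sketch of the behavior, not the actual Automation document.

```python
def remediate_backup_tag(tags: dict) -> dict:
    """Tags the remediation runbook would leave on a volume.

    Defaults Backup_Frequency to weekly when absent, matching the company's
    fallback; existing values (none, daily, weekly) are left untouched.
    """
    if "Backup_Frequency" not in tags:
        return {**tags, "Backup_Frequency": "weekly"}
    return tags
```

Note that a developer who tags a volume with Backup_Frequency=none is still compliant, since the requirement is that the tag exists, not that backups run.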
A4
A company is using an Amazon Aurora cluster as the data store for its application.
The Aurora cluster is configured with a single DB instance.
The application performs read and write operations on the database by using the cluster's instance endpoint.
The company has scheduled an update to be applied to the cluster during an upcoming maintenance window.
The cluster must remain available with the least possible interruption during the maintenance window.
What should a DevOps engineer do to meet these requirements?
Q5
The Correct Answer is:
✅ "Option A. Add a reader instance to the Aurora cluster. Update the application to use the Aurora cluster endpoint for write operations. Update the Aurora cluster's reader endpoint for reads."
✅ Here's why:
✅ Failover During Maintenance: With a second instance in the cluster, writes go through the cluster endpoint and reads through the reader endpoint; while one instance is being updated, Aurora can fail over to the other, so the cluster remains available with the least possible interruption.
🔴 Now, let's examine why the other options are not the best choice:
❌ Option B: While this approach also adds a reader instance, using a custom ANY endpoint for both read and write operations does not segregate traffic and might not optimally utilize the reader instance during maintenance.
❌ Option C: Aurora automatically operates across multiple Availability Zones without an explicit Multi-AZ option to turn on, as RDS has. This option doesn't directly address the requirement to maintain availability during maintenance.
❌ Option D: Similar to option C, this misunderstands Aurora's built-in high availability features, and a custom ANY endpoint does not provide the targeted benefit of separating read and write operations to maintain availability during updates.
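Adding a reader to an existing Aurora cluster uses the same create-db-instance call as creating a standalone instance, with the cluster identifier attaching it as a reader. The sketch below shows the parameters; the identifiers, engine, and instance class are assumptions, not details from the question.

```python
def reader_instance_params(cluster_id: str, instance_id: str) -> dict:
    """Parameters for rds create_db_instance that add a reader to an
    existing Aurora cluster. All names/values here are placeholders."""
    return {
        "DBInstanceIdentifier": instance_id,
        # Supplying the cluster identifier attaches the instance to the
        # existing Aurora cluster as a reader (and failover target).
        "DBClusterIdentifier": cluster_id,
        "Engine": "aurora-mysql",        # assumed engine
        "DBInstanceClass": "db.r6g.large",  # assumed size
    }
```

Once the reader is available, the application points writes at the cluster endpoint and reads at the reader endpoint; no further endpoint creation is needed, since Aurora provides both automatically.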
A5
A company must encrypt all AMIs that the company shares across accounts.
A DevOps engineer has access to a source account where an unencrypted custom AMI has been built.
The DevOps engineer also has access to a target account where an Amazon EC2 Auto Scaling group will launch EC2 instances from the AMI.
The DevOps engineer must share the AMI with the target account.
The company has created an AWS Key Management Service (AWS KMS) key in the source account.
Which additional steps should the DevOps engineer perform to meet the requirements? (Choose three.)
Q6
The Correct Answers are:
✅ Option A: In the source account, copy the unencrypted AMI to an encrypted AMI. Specify the KMS key in the copy action.
✅ Option D: In the source account, modify the key policy to give the target account permissions to create a grant. In the target account, create a KMS grant that delegates permissions to the Auto Scaling group service-linked role.
✅ Option F: In the source account, share the encrypted AMI with the target account.
✅ Here's why:
✅ Encrypt, Share, Delegate: Copying the AMI with the company's KMS key produces the required encrypted AMI, sharing that copy makes it available to the target account, and the key-policy change plus the grant allow the Auto Scaling service-linked role to use the key when launching instances from the encrypted snapshots.
🔴 Now, let's examine why the other options are not the best choice:
❌ Option B: Copying the AMI with the default Amazon EBS encryption key does not meet the requirement of using the specific AWS KMS key created by the company.
❌ Option C: Creating a KMS grant in the source account without further action does not complete the process of sharing the encrypted AMI with the target account or ensuring it can be used there.
❌ Option E: Sharing the unencrypted AMI does not meet the company's requirement to encrypt all AMIs shared across accounts.
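The grant from option D can be sketched as the parameters for a kms create_grant call made in the target account. The service-linked role path is the standard one for EC2 Auto Scaling; the key ARN and account ID are placeholders.

```python
def autoscaling_kms_grant(key_arn: str, target_account_id: str) -> dict:
    """Parameters for kms create_grant, delegating use of the source
    account's key to the Auto Scaling service-linked role so it can
    decrypt the shared AMI's EBS snapshots at launch time."""
    role_arn = (
        f"arn:aws:iam::{target_account_id}:role/aws-service-role/"
        "autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
    )
    return {
        "KeyId": key_arn,  # the source account's KMS key
        "GranteePrincipal": role_arn,
        "Operations": [
            "Decrypt",
            "GenerateDataKeyWithoutPlaintext",
            "ReEncryptFrom",
            "ReEncryptTo",
            "CreateGrant",
            "DescribeKey",
        ],
    }
```

A grant is used here rather than a role policy because the service-linked role's policies cannot be edited; grants are the supported way to delegate cross-account key usage to it.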
A6
A company uses AWS CodePipeline pipelines to automate releases of its application.
A typical pipeline consists of three stages: build, test, and deployment.
The company has been using a separate AWS CodeBuild project to run scripts for each stage.
However, the company now wants to use AWS CodeDeploy to handle the deployment stage of the pipelines.
The company has packaged the application as an RPM package and must deploy the application to a fleet of Amazon EC2 instances.
The EC2 instances are in an EC2 Auto Scaling group and are launched from a common AMI.
Which combination of steps should a DevOps engineer perform to meet these requirements? (Choose two.)
Q7
The Correct Answers are:
✅ Option A: Create a new version of the common AMI with the CodeDeploy agent installed. Update the IAM role of the EC2 instances to allow access to CodeDeploy.
✅ Option D: Create an application in CodeDeploy. Configure an in-place deployment type. Specify the Auto Scaling group as the deployment target. Update the CodePipeline pipeline to use the CodeDeploy action to deploy the application.
✅ Here's why:
✅ Necessity of CodeDeploy Agent: For AWS CodeDeploy to function, the instances it deploys to must have the CodeDeploy agent installed. This is critical for enabling the EC2 instances in the Auto Scaling group to interact with CodeDeploy for deployment operations.
✅ IAM Role Configuration: Updating the IAM role of the EC2 instances to include permissions for accessing CodeDeploy is crucial for authenticating and authorizing the instances to pull deployment instructions from CodeDeploy.
✅ Deployment Configuration in CodeDeploy: Setting up an application in CodeDeploy and configuring it for in-place deployment ensures that existing instances can be updated with the new application version directly, leveraging the Auto Scaling group to identify the deployment targets.
🔴 Now, let's examine why the other options are not the best choice:
❌ Option B: While an AppSpec file is necessary for CodeDeploy deployments, having it alone, without integrating the deployment into CodePipeline or installing the CodeDeploy agent on the AMI, does not meet all the requirements for automated deployment.
❌ Option C: Adding a step that uses EC2 Image Builder to create a new AMI is unnecessary complexity for this scenario. The key requirement is to deploy an application packaged as an RPM, which does not require building a new AMI for each deployment.
❌ Option E: This option is redundant with option D but is less specific about using the common AMI and adds no detail that would make it a better choice.
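For context, the revision such a deployment would carry might include an appspec.yml along the lines of the sketch below, expressed here as a Python dict that would be serialized to YAML at the bundle root. The file and script names are illustrative assumptions.

```python
# Sketch of appspec.yml content for an in-place RPM deployment.
# "application.rpm" and "scripts/install_rpm.sh" are placeholder names;
# the hook script would run something like: rpm -Uvh /tmp/deploy/application.rpm
APPSPEC = {
    "version": 0.0,
    "os": "linux",
    "files": [
        {"source": "application.rpm", "destination": "/tmp/deploy"},
    ],
    "hooks": {
        "AfterInstall": [
            {"location": "scripts/install_rpm.sh", "timeout": 300, "runas": "root"},
        ],
    },
}
```

The CodeDeploy agent on each instance reads this file to copy the package and invoke the install script, which is why the agent and instance-role permissions in option A are prerequisites.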
A7
A company's security team requires that all external Application Load Balancers (ALBs) and Amazon API Gateway APIs are associated with AWS WAF web ACLs.
The company has hundreds of AWS accounts, all of which are included in a single organization in AWS Organizations.
The company has configured AWS Config for the organization.
During an audit, the company finds some externally facing ALBs that are not associated with AWS WAF web ACLs.
Which combination of steps should a DevOps engineer take to prevent future violations? (Choose two.)
Q8
The Correct Answers are:
✅ Option A: Delegate AWS Firewall Manager to a security account.
✅ Option C: Create an AWS Firewall Manager policy to attach AWS WAF web ACLs to any newly created ALBs and API Gateway APIs.
✅ Here's why:
✅ Centralized Management: Delegating AWS Firewall Manager to a security account allows centralized management of security policies, including WAF web ACLs, across all accounts within the organization. This streamlines the process of ensuring compliance with the security team's requirements.
✅ Automated Policy Application: Creating an AWS Firewall Manager policy to automatically attach AWS WAF web ACLs to newly created ALBs and API Gateway APIs ensures that all new resources are immediately compliant with the company's security policies. This preemptively prevents resources from being deployed without the necessary WAF protections.
🔴 Now, let's examine why the other options are not the best choice:
❌ Option B: Delegating Amazon GuardDuty to a security account, while useful for threat detection and continuous monitoring, does not directly address the requirement to associate AWS WAF web ACLs with ALBs and API Gateway APIs.
❌ Option D: Amazon GuardDuty does not have the capability to attach AWS WAF web ACLs to resources. GuardDuty is focused on security analysis and threat detection, not on managing WAF associations.
❌ Option E: While AWS Config can detect configurations that do not comply with the company's policies, it has no built-in capability to automatically attach AWS WAF web ACLs to ALBs and API Gateway APIs. AWS Config can identify non-compliant resources but would require additional custom automation to remediate them.
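The Firewall Manager policy from option C can be sketched as the parameters for an fms put_policy call. This is a deliberately abbreviated sketch: the ManagedServiceData shown carries only the core fields, a real policy would also define the rule groups to enforce, and the policy name is a placeholder.

```python
import json

def alb_waf_policy(policy_name: str) -> dict:
    """Parameters for fms put_policy enforcing WAFv2 web ACL association
    on ALBs org-wide. ManagedServiceData is abbreviated for illustration;
    a companion policy would target API Gateway stages the same way."""
    managed_service_data = {
        "type": "WAFV2",
        "preProcessRuleGroups": [],   # real policies list rule groups here
        "postProcessRuleGroups": [],
        "defaultAction": {"type": "ALLOW"},
        "overrideCustomerWebACLAssociation": False,
    }
    return {
        "Policy": {
            "PolicyName": policy_name,
            "SecurityServicePolicyData": {
                "Type": "WAFV2",
                "ManagedServiceData": json.dumps(managed_service_data),
            },
            "ResourceType": "AWS::ElasticLoadBalancingV2::LoadBalancer",
            "ExcludeResourceTags": False,
            "RemediationEnabled": True,  # auto-attach the web ACL to new ALBs
        },
    }
```

With RemediationEnabled set, Firewall Manager attaches the managed web ACL to in-scope ALBs as they are created, which is what prevents future violations across all accounts in the organization.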
A8
A company uses AWS KMS with CMKs and manual key rotation to meet regulatory compliance requirements.
The security team wants to be notified when any keys have not been rotated after 90 days.
Which solution will accomplish this?
Q9
The Correct Answer is:
✅ Option C: Develop an AWS Config custom rule that publishes to an Amazon SNS topic when keys are more than 90 days old.
✅ Here's why:
✅ Custom Monitoring: AWS Config allows for the development of custom rules tailored to specific compliance requirements, such as monitoring the rotation of AWS KMS keys. This ensures that any keys not rotated within 90 days can be identified, meeting regulatory compliance needs.
✅ Notification Capability: By publishing notifications to an Amazon SNS topic, stakeholders can be immediately informed about non-compliance, enabling prompt action to rotate keys and maintain compliance.
✅ Automation and Scalability: AWS Config can automatically monitor and evaluate AWS resources across an entire AWS environment, providing a scalable solution to compliance monitoring.
🔴 Now, let's examine why the other options are not the best choice:
❌ Option A: AWS KMS does not natively publish notifications to an Amazon SNS topic based on the age of keys. This functionality would need to be implemented via custom monitoring or AWS Config.
❌ Option B: While CloudWatch Events and AWS Lambda can be used for custom monitoring, the Trusted Advisor API does not report the rotation age of KMS keys; Trusted Advisor focuses on broader best-practice recommendations.
❌ Option D: AWS Security Hub aggregates security findings but does not support custom monitoring for specific compliance rules, such as the age of KMS keys, without integration with other services like AWS Config.
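The evaluation at the heart of such a custom rule's Lambda can be sketched as a pure date comparison. This assumes the company records its manual rotation dates somewhere the rule can read (for example, a key tag), since manual rotation timestamps are not a built-in key attribute.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

ROTATION_LIMIT = timedelta(days=90)

def is_rotation_overdue(last_rotated: datetime,
                        now: Optional[datetime] = None) -> bool:
    """Return True when a key's last manual rotation is more than 90 days
    old. A Config custom rule's Lambda would mark such keys NON_COMPLIANT
    and publish to the SNS topic."""
    now = now or datetime.now(timezone.utc)
    return now - last_rotated > ROTATION_LIMIT
```

Keys flagged by this check would be reported as NON_COMPLIANT, and the SNS publication gives the security team the required notification.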
A9
A security review has identified that an AWS CodeBuild project is downloading a database population script from an Amazon S3 bucket using an unauthenticated request.
The Security team does not allow unauthenticated requests to S3 buckets for this project.
How can this issue be corrected in the MOST secure manner?
Q10
The Correct Answer is:
✅ Option C: Remove unauthenticated access from the S3 bucket with a bucket policy. Modify the service role for the CodeBuild project to include Amazon S3 access. Use the AWS CLI to download the database population script.
✅ Here's why:
✅ Security Compliance: Removing unauthenticated access and enforcing access control through IAM roles aligns with AWS best practices for securing S3 buckets, ensuring only authorized entities can access sensitive resources.
✅ IAM Role Configuration: Modifying the service role for CodeBuild to include Amazon S3 access provides a secure, scalable way to manage permissions, leveraging AWS's built-in security mechanisms without exposing sensitive credentials.
✅ Utilizing the AWS CLI: With the service role's permissions in place, the AWS CLI in the build environment downloads the script using credentials supplied automatically by the role, so nothing sensitive is embedded in scripts or build specifications.
🔴 Now, let's examine why the other options are not the best choice:
❌ Option A: While specifying allowed buckets in the CodeBuild project settings is a step toward securing access, it does not address the unauthenticated access the security team identified.
❌ Option B: S3 does not support HTTP basic authentication with tokens in this manner. This option does not align with AWS security practices and capabilities.
❌ Option D: Embedding IAM access keys and secret access keys in build specifications or scripts is not recommended due to the risk of key compromise. IAM roles and policies are the more secure and manageable access control.
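One way to close the unauthenticated path, alongside removing any public-read statements from the bucket policy, is S3 Block Public Access. The sketch below shows the parameters for an s3 put_public_access_block call; the bucket name is a placeholder.

```python
def lock_down_bucket_params(bucket: str) -> dict:
    """Parameters for s3 put_public_access_block, disabling anonymous
    access to the bucket. The CodeBuild service role then needs
    s3:GetObject on the script's key, and the build downloads it with
    the AWS CLI using the role's credentials."""
    return {
        "Bucket": bucket,
        "PublicAccessBlockConfiguration": {
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    }
```

Inside the build, the download then becomes an authenticated `aws s3 cp` call, with no credentials stored in the buildspec.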
A10