Category Archives: Amazon

Amazon CLF-C02 AWS Certified Cloud Practitioner (CLF-C02) Exam

Exam overview
Category: Foundational
Exam duration: 90 minutes
Exam format: 65 questions; either multiple choice or multiple response
Cost: 100 USD
Delivery: In person or online
Languages offered: English, Japanese, Korean, Simplified Chinese, Traditional Chinese, Bahasa (Indonesian), Spanish (Spain), Spanish (Latin America), French (France), German, Italian, and Portuguese (Brazil)

The AWS Certified Cloud Practitioner validates foundational, high-level understanding of AWS Cloud, services, and terminology. This is a good starting point on the AWS Certification journey for individuals with no prior IT or cloud experience switching to a cloud career or for line-of-business employees looking for foundational cloud literacy.

Prepare for the exam
Below are recommended steps you can follow to get ready for exam day. Note: AWS does not require you to take any specific AWS training as part of your exam prep.


The exam validates a candidate’s ability to complete the following tasks:
* Explain the value of the AWS Cloud.
* Understand and explain the AWS shared responsibility model.
* Understand security best practices.
* Understand AWS Cloud costs, economics, and billing practices.
* Describe and position the core AWS services, including compute, network, database, and storage services.
* Identify AWS services for common use cases.

Recommended AWS knowledge
The target candidate should have AWS knowledge in the following areas:
* AWS Cloud concepts
* Security and compliance in the AWS Cloud
* Core AWS services
* Economics of the AWS Cloud

Content outline
This CLF-C02 exam guide includes weightings, content domains, and task statements for the exam. Refer to Appendix B for a comparison of the previous version (CLF-C01) and current version (CLF-C02) of the exam.
This guide does not provide a comprehensive list of the content on the exam. However, additional context for each task statement is available to help you prepare for the exam.
The exam has the following content domains and weightings:

* Domain 1: Cloud Concepts (24% of scored content)
* Domain 2: Security and Compliance (30% of scored content)
* Domain 3: Cloud Technology and Services (34% of scored content)
* Domain 4: Billing, Pricing, and Support (12% of scored content)

Domain 1: Cloud Concepts
Task Statement 1.1: Define the benefits of the AWS Cloud. Knowledge of:
* Value proposition of the AWS Cloud
Skills in:
* Understanding the economies of scale (for example, cost savings)
* Understanding the benefits of global infrastructure (for example, speed of deployment, global reach)
* Understanding the advantages of high availability, elasticity, and agility

Task Statement 1.2: Identify design principles of the AWS Cloud. Knowledge of:
* AWS Well-Architected Framework
Skills in:
* Understanding the pillars of the Well-Architected Framework (for example, operational excellence, security, reliability, performance efficiency, cost optimization, sustainability)
* Identifying differences between the pillars of the Well-Architected Framework

Task Statement 1.3: Understand the benefits of and strategies for migration to the AWS Cloud. Knowledge of:
* Cloud adoption strategies
* Resources to support the cloud migration journey
Skills in:
* Understanding the benefits of the AWS Cloud Adoption Framework (AWS CAF) (for example, reduced business risk; improved environmental, social, and governance (ESG) performance; increased revenue; increased operational efficiency)
* Identifying appropriate migration strategies (for example, database replication, use of AWS Snowball)

Task Statement 1.4: Understand concepts of cloud economics. Knowledge of:
* Aspects of cloud economics
* Cost savings of moving to the cloud
Skills in:
* Understanding the role of fixed costs compared with variable costs
* Understanding costs that are associated with on-premises environments
* Understanding the differences between licensing strategies (for example, Bring Your Own License [BYOL] model compared with included licenses)
* Understanding the concept of rightsizing
* Identifying benefits of automation (for example, provisioning and configuration management with AWS CloudFormation)
* Identifying managed AWS services (for example, Amazon RDS, Amazon Elastic Container Service [Amazon ECS], Amazon Elastic Kubernetes Service [Amazon EKS], Amazon DynamoDB)
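
To make the fixed-versus-variable cost distinction above concrete, here is a minimal sketch; every dollar figure in it is hypothetical, chosen only to illustrate the break-even reasoning, not taken from real AWS pricing.

```python
# Illustrative fixed-vs.-variable cost comparison. Every dollar figure here
# is hypothetical and exists only to demonstrate the break-even reasoning.

def on_prem_cost(months: int, hardware: float = 120_000.0,
                 monthly_ops: float = 2_000.0) -> float:
    """Fixed up-front hardware spend plus a steady monthly operating cost."""
    return hardware + monthly_ops * months

def cloud_cost(months: int, monthly_usage: float = 5_500.0) -> float:
    """Pure variable cost: pay only for what is consumed each month."""
    return monthly_usage * months

# Early on, pay-as-you-go wins because there is no capital expense.
print(f"6-month cloud spend:   ${cloud_cost(6):,.0f}")    # $33,000
print(f"6-month on-prem spend: ${on_prem_cost(6):,.0f}")  # $132,000
```

Over a long enough horizon at constant usage the curves cross, which is exactly why rightsizing and continual cost review matter in the cloud model.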

Domain 2: Security and Compliance

Task Statement 2.1: Understand the AWS shared responsibility model. Knowledge of:
* AWS shared responsibility model
Skills in:
* Recognizing the components of the AWS shared responsibility model
* Describing the customer’s responsibilities on AWS
* Describing AWS responsibilities
* Describing responsibilities that the customer and AWS share
* Describing how AWS responsibilities and customer responsibilities can shift, depending on the service used (for example, Amazon RDS, AWS Lambda, Amazon EC2)

Task Statement 2.2: Understand AWS Cloud security, governance, and compliance concepts. Knowledge of:
* AWS compliance and governance concepts
* Benefits of cloud security (for example, encryption)
* Where to capture and locate logs that are associated with cloud security
Skills in:
* Identifying where to find AWS compliance information (for example, AWS Artifact)
* Understanding compliance needs among geographic locations or industries (for example, AWS Compliance)
* Describing how customers secure resources on AWS (for example, Amazon Inspector, AWS Security Hub, Amazon GuardDuty, AWS Shield)
* Identifying different encryption options (for example, encryption in transit, encryption at rest)
* Recognizing services that aid in governance and compliance (for example, monitoring with Amazon CloudWatch; auditing with AWS CloudTrail, AWS Audit Manager, and AWS Config; reporting with access reports)
* Recognizing compliance requirements that vary among AWS services

Task Statement 2.3: Identify AWS access management capabilities. Knowledge of:
* Identity and access management (for example, AWS Identity and Access Management [IAM])
* Importance of protecting the AWS root user account
* Principle of least privilege
* AWS IAM Identity Center (AWS Single Sign-On)
Skills in:
* Understanding access keys, password policies, and credential storage (for example, AWS Secrets Manager, AWS Systems Manager)
* Identifying authentication methods in AWS (for example, multi-factor authentication [MFA], IAM Identity Center, cross-account IAM roles)
* Defining groups, users, custom policies, and managed policies in compliance with the principle of least privilege
* Identifying tasks that only the account root user can perform
* Understanding which methods can achieve root user protection
* Understanding the types of identity management (for example, federated)
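
As a concrete illustration of the principle of least privilege, the sketch below builds an IAM-style identity policy as a Python dict; the bucket name example-reports and the statement ID are hypothetical.

```python
import json

# Sketch of a least-privilege identity policy as the JSON document IAM
# expects (built here as a Python dict). The bucket name "example-reports"
# and the Sid are hypothetical; the Version string is IAM's required
# fixed value.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadReportsOnly",
            "Effect": "Allow",
            # Grant only the two read actions the task requires -- nothing else.
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports",
                "arn:aws:s3:::example-reports/*",
            ],
        }
    ],
}

print(json.dumps(read_only_policy, indent=2))
```

Attaching a narrowly scoped policy like this to a group, rather than granting broad managed policies to individual users, is the usual way the principle is applied in practice.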

Task Statement 2.4: Identify components and resources for security. Knowledge of:
* Security capabilities that AWS provides
* Security-related documentation that AWS provides
Skills in:
* Describing AWS security features and services (for example, security groups, network ACLs, AWS WAF)
* Understanding that third-party security products are available from AWS Marketplace
* Identifying where AWS security information is available (for example, AWS Knowledge Center, AWS Security Center, AWS Security Blog)
* Understanding the use of AWS services for identifying security issues (for example, AWS Trusted Advisor)

Domain 3: Cloud Technology and Services

Task Statement 3.1: Define methods of deploying and operating in the AWS Cloud. Knowledge of:
* Different ways of provisioning and operating in the AWS Cloud
* Different ways to access AWS services
* Types of cloud deployment models
* Connectivity options
Skills in:
* Deciding between options such as programmatic access (for example, APIs, SDKs, CLI), the AWS Management Console, and infrastructure as code (IaC)
* Evaluating requirements to determine whether to use one-time operations or repeatable processes
* Identifying different deployment models (for example, cloud, hybrid, on-premises)
* Identifying connectivity options (for example, AWS VPN, AWS Direct Connect, public internet)

Task Statement 3.2: Define the AWS global infrastructure. Knowledge of:
* AWS Regions, Availability Zones, and edge locations
* High availability
* Use of multiple Regions
* Benefits of edge locations
* AWS Wavelength Zones and AWS Local Zones
Skills in:
* Describing relationships among Regions, Availability Zones, and edge locations
* Describing how to achieve high availability by using multiple Availability Zones
* Recognizing that Availability Zones do not share single points of failure
* Describing when to use multiple Regions (for example, disaster recovery, business continuity, low latency for end users, data sovereignty)
* Describing at a high level the benefits of edge locations (for example, Amazon CloudFront, AWS Global Accelerator)
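
The high-availability benefit of multiple Availability Zones can be sketched with simple probability; the 99.5% per-AZ availability figure below is hypothetical, and the calculation assumes AZ failures are independent (no shared single points of failure).

```python
# Back-of-the-envelope illustration of multi-AZ high availability.
# The 99.5% per-AZ figure is hypothetical, and AZ failures are assumed
# to be independent (no shared single points of failure).

def multi_az_availability(per_az: float, zones: int) -> float:
    """Probability that at least one of `zones` independent AZs is up."""
    return 1 - (1 - per_az) ** zones

single = multi_az_availability(0.995, 1)
dual = multi_az_availability(0.995, 2)
print(f"1 AZ: {single:.4%}   2 AZs: {dual:.4%}")
```

Adding a second independent zone squares the failure probability, which is why running across multiple Availability Zones is the baseline pattern for high availability.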

Task Statement 3.3: Identify AWS compute services. Knowledge of:
* AWS compute services
Skills in:
* Recognizing the appropriate use of different EC2 instance types (for example, compute optimized, storage optimized)
* Recognizing the appropriate use of different container options (for example, Amazon ECS, Amazon EKS)
* Recognizing the appropriate use of different serverless compute options (for example, AWS Fargate, Lambda)
* Recognizing that auto scaling provides elasticity
* Identifying the purposes of load balancers
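
The elasticity that auto scaling provides can be sketched as a simple capacity rule; the per-instance capacity and fleet limits below are invented illustration values, not an Amazon EC2 Auto Scaling API.

```python
# Toy sketch of the elasticity behind auto scaling: derive an instance
# count from current demand. The per-instance capacity and the fleet
# minimum/maximum are invented for illustration.

def desired_instances(requests_per_min: int,
                      capacity_per_instance: int = 1000,
                      minimum: int = 2, maximum: int = 10) -> int:
    needed = -(-requests_per_min // capacity_per_instance)  # ceiling division
    return max(minimum, min(maximum, needed))               # clamp to limits

print(desired_instances(500))    # quiet period: stays at the floor of 2
print(desired_instances(4500))   # busy period: scales out to 5
print(desired_instances(50000))  # spike: capped at the maximum of 10
```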

Task Statement 3.4: Identify AWS database services. Knowledge of:
* AWS database services
* Database migration
Skills in:
* Deciding when to use EC2 hosted databases or AWS managed databases
* Identifying relational databases (for example, Amazon RDS, Amazon Aurora)
* Identifying NoSQL databases (for example, DynamoDB)
* Identifying memory-based databases
* Identifying database migration tools (for example, AWS Database Migration Service [AWS DMS], AWS Schema Conversion Tool [AWS SCT])

Task Statement 3.5: Identify AWS network services. Knowledge of:
* AWS network services
Skills in:
* Identifying the components of a VPC (for example, subnets, gateways)
* Understanding security in a VPC (for example, network ACLs, security groups)
* Understanding the purpose of Amazon Route 53
* Identifying edge services (for example, CloudFront, Global Accelerator)
* Identifying network connectivity options to AWS (for example, AWS VPN, AWS Direct Connect)

Task Statement 3.6: Identify AWS storage services. Knowledge of:
* AWS storage services
Skills in:
* Identifying the uses for object storage
* Recognizing the differences in Amazon S3 storage classes
* Identifying block storage solutions (for example, Amazon Elastic Block Store [Amazon EBS], instance store)
* Identifying file services (for example, Amazon Elastic File System [Amazon EFS], Amazon FSx)
* Identifying cached file systems (for example, AWS Storage Gateway)
* Understanding use cases for lifecycle policies
* Understanding use cases for AWS Backup
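
A lifecycle policy's tier-then-expire pattern can be sketched as the configuration dict that boto3's put_bucket_lifecycle_configuration accepts; the "logs/" prefix and the day counts are hypothetical.

```python
# Sketch of an S3 lifecycle configuration in the dict form that boto3's
# put_bucket_lifecycle_configuration accepts. The "logs/" prefix and the
# day counts are hypothetical; the pattern shown is tier-then-expire.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-then-delete-logs",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            # Move aging objects to progressively cheaper storage classes...
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            # ...then delete them once they are no longer needed.
            "Expiration": {"Days": 365},
        }
    ]
}
```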

Task Statement 3.7: Identify AWS artificial intelligence and machine learning (AI/ML) services and analytics services. Knowledge of:
* AWS AI/ML services
* AWS analytics services
Skills in:
* Understanding the different AI/ML services and the tasks that they accomplish (for example, Amazon SageMaker, Amazon Lex, Amazon Kendra)
* Identifying the services for data analytics (for example, Amazon Athena, Amazon Kinesis, AWS Glue, Amazon QuickSight)

Task Statement 3.8: Identify services from other in-scope AWS service categories. Knowledge of:
* Application integration services of Amazon EventBridge, Amazon Simple Notification Service (Amazon SNS), and Amazon Simple Queue Service (Amazon SQS)
* Business application services of Amazon Connect and Amazon Simple Email Service (Amazon SES)
* Customer engagement services of AWS Activate for Startups, AWS IQ, AWS Managed Services (AMS), and AWS Support
* Developer tool services and capabilities of AWS AppConfig, AWS Cloud9, AWS CloudShell, AWS CodeArtifact, AWS CodeBuild, AWS CodeCommit, AWS CodeDeploy, AWS CodePipeline, AWS CodeStar, and AWS X-Ray
* End-user computing services of Amazon AppStream 2.0, Amazon WorkSpaces, and Amazon WorkSpaces Web
* Frontend web and mobile services of AWS Amplify and AWS AppSync
* IoT services of AWS IoT Core and AWS IoT Greengrass
Skills in:
* Choosing the appropriate service to deliver messages and to send alerts and notifications
* Choosing the appropriate service to meet business application needs
* Choosing the appropriate service for AWS customer support
* Choosing the appropriate option for business support assistance
* Identifying the tools to develop, deploy, and troubleshoot applications
* Identifying the services that can present the output of virtual machines (VMs) on end-user machines
* Identifying the services that can create and deploy frontend and mobile services
* Identifying the services that manage IoT devices

Domain 4: Billing, Pricing, and Support

Task Statement 4.1: Compare AWS pricing models. Knowledge of:
* Compute purchasing options (for example, On-Demand Instances, Reserved Instances, Spot Instances, Savings Plans, Dedicated Hosts, Dedicated Instances, Capacity Reservations)
* Data transfer charges
* Storage options and tiers
Skills in:
* Identifying and comparing when to use various compute purchasing options
* Describing Reserved Instance flexibility
* Describing Reserved Instance behavior in AWS Organizations
* Understanding incoming data transfer costs and outgoing data transfer costs (for example, from one Region to another Region, within the same Region)
* Understanding different pricing options for various storage options and tiers
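
The On-Demand versus Reserved Instance trade-off comes down to utilization; the sketch below uses invented rates (not real AWS prices) to show the break-even reasoning.

```python
# Hypothetical On-Demand vs. 1-year Reserved Instance comparison.
# The rates below are invented for illustration, not real AWS prices.

HOURS_PER_YEAR = 8760
ON_DEMAND_RATE = 0.10      # $/hour, billed only while the instance runs
RESERVED_YEARLY = 525.60   # flat yearly commitment (a ~40% discount here)

def on_demand_cost(hours: int) -> float:
    return ON_DEMAND_RATE * hours

# Running ~65% of the year already makes the reservation the cheaper choice.
assert RESERVED_YEARLY < on_demand_cost(int(HOURS_PER_YEAR * 0.65))
# A lightly used instance (10% of the year) favors On-Demand instead.
assert on_demand_cost(int(HOURS_PER_YEAR * 0.10)) < RESERVED_YEARLY
```

The same utilization logic underlies Savings Plans and Spot: steady, predictable usage rewards commitment, while intermittent or interruptible usage favors pay-as-you-go pricing.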

Task Statement 4.2: Understand resources for billing, budget, and cost management. Knowledge of:
* Billing support and information
* Pricing information for AWS services
* AWS Organizations
* AWS cost allocation tags
Skills in:
* Understanding the appropriate uses and capabilities of AWS Budgets, AWS Cost Explorer, and AWS Billing Conductor
* Understanding the appropriate uses and capabilities of AWS Pricing Calculator
* Understanding AWS Organizations consolidated billing and allocation of costs
* Understanding various types of cost allocation tags and their relation to billing reports (for example, AWS Cost and Usage Report)

Task Statement 4.3: Identify AWS technical resources and AWS Support options. Knowledge of:
* Resources and documentation available on official AWS websites
* AWS Support plans
* Role of the AWS Partner Network, including independent software vendors and system integrators
* AWS Support Center
Skills in:
* Locating AWS whitepapers, blogs, and documentation on official AWS websites
* Identifying and locating AWS technical resources (for example, AWS Prescriptive Guidance, AWS Knowledge Center, AWS re:Post)
* Identifying AWS Support options for AWS customers (for example, customer service and communities, AWS Developer Support, AWS Business Support, AWS Enterprise On-Ramp Support, AWS Enterprise Support)
* Identifying the role of Trusted Advisor, AWS Health Dashboard, and the AWS Health API to help manage and monitor environments for cost optimization
* Identifying the role of the AWS Trust and Safety team to report abuse of AWS resources
* Understanding the role of AWS Partners (for example, AWS Marketplace, independent software vendors, system integrators)
* Identifying the benefits of being an AWS Partner (for example, partner training and certification, partner events, partner volume discounts)
* Identifying the key services that AWS Marketplace offers (for example, cost management, governance and entitlement)
* Identifying technical assistance options available at AWS (for example, AWS Professional Services, AWS Solutions Architects)

Sample Questions

QUESTION 1
Which database engine is compatible with Amazon RDS?

A. Apache Cassandra
B. MongoDB
C. Neo4j
D. PostgreSQL

Answer: D

Explanation:
Amazon RDS supports six database engines: Amazon Aurora, MySQL, MariaDB, PostgreSQL, Oracle, and SQL Server. Apache Cassandra, MongoDB, and Neo4j are not compatible with Amazon RDS. Therefore, the correct answer is D.

QUESTION 2
A company needs to run code in response to an event notification that occurs when objects are uploaded to an Amazon S3 bucket.
Which AWS service will integrate directly with the event notification?

A. AWS Lambda
B. Amazon EC2
C. Amazon Elastic Container Registry (Amazon ECR)
D. AWS Elastic Beanstalk

Answer: A

Explanation:
AWS Lambda is a service that lets you run code without provisioning or managing servers. You can use Lambda to process event notifications from Amazon S3 when objects are uploaded or deleted.
Lambda integrates directly with the event notification and invokes your code automatically. Therefore, the correct answer is A.

QUESTION 3
A company wants to centrally manage security policies and billing services within a multi-account
AWS environment. Which AWS service should the company use to meet these requirements?

A. AWS Identity and Access Management (IAM)
B. AWS Organizations
C. AWS Resource Access Manager (AWS RAM)
D. AWS Config

Answer: B

Explanation:
AWS Organizations is a service that helps you centrally manage and govern your environment as you grow and scale your AWS resources. You can use AWS Organizations to create groups of accounts, apply policies to them, and consolidate billing for multiple accounts. Therefore, the correct answer is B.

QUESTION 4
What are the characteristics of Availability Zones? (Select TWO.)

A. All Availability Zones in an AWS Region are interconnected with high-bandwidth, low-latency networking
B. Availability Zones are physically separated by a minimum distance of 150 km (100 miles).
C. All traffic between Availability Zones is encrypted.
D. Availability Zones within an AWS Region share redundant power, networking, and connectivity.
E. Every Availability Zone contains a single data center.

Answer: A, D

Explanation:
Availability Zones are physically separate locations within an AWS Region that are engineered to be isolated from failures. Each Availability Zone has independent power, cooling, and physical security, and is connected to the other Availability Zones in the same Region by a high-bandwidth, low-latency network. Therefore, the correct answers are A and D.

QUESTION 5
Which AWS Well-Architected Framework concept represents a system’s ability to remain functional when the system encounters operational problems?

A. Consistency
B. Elasticity
C. Durability
D. Latency

Answer: B

Explanation:
The AWS Well-Architected Framework is a set of best practices and guidelines for designing and operating systems in the cloud. The framework consists of six pillars: operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability. The concept of elasticity represents a system's ability to adapt to changes in demand by scaling resources up or down automatically. Therefore, the correct answer is B.

QUESTION 6
Which AWS service or tool does AWS Control Tower use to create resources?

A. AWS CloudFormation
B. AWS Trusted Advisor
C. AWS Directory Service
D. AWS Cost Explorer

Answer: A

Explanation:
AWS Control Tower uses AWS CloudFormation to create resources in your landing zone. AWS CloudFormation is a service that helps you model and set up your AWS resources using templates. AWS Control Tower supports creating AWS::ControlTower::EnabledControl resources in AWS CloudFormation. Therefore, the correct answer is A.

QUESTION 7
What are some advantages of using Amazon EC2 instances to host applications in the AWS Cloud instead of on premises? (Select TWO.)

A. EC2 includes operating system patch management
B. EC2 integrates with Amazon VPC, AWS CloudTrail, and AWS Identity and Access Management (IAM).
C. EC2 has a 100% service level agreement (SLA).
D. EC2 has a flexible, pay-as-you-go pricing model.
E. EC2 has automatic storage cost optimization.

Answer: B, D

QUESTION 8
Which option is an advantage of AWS Cloud computing that minimizes variable costs?

A. High availability
B. Economies of scale
C. Global reach
D. Agility

Answer: B


Amazon AWS Certified Big Data Specialty BDS-C00 Exam Free Downloads

Introduction
The AWS Certified Big Data – Specialty (BDS-C00) examination is intended for individuals who perform complex Big Data analyses. This exam validates an examinee’s technical skills and experience in designing and implementing AWS services to derive value from data.

It validates an examinee’s ability to:
* Implement core AWS Big Data services according to basic architectural best practices
* Design and maintain Big Data
* Leverage tools to automate Data Analysis

Examination Prerequisite

In order to take this examination, you must hold an AWS Associate certification (AWS Certified Solutions Architect – Associate, AWS Certified Developer – Associate, or AWS Certified SysOps Administrator – Associate) or a valid AWS Certified Cloud Practitioner certification in good standing.

Recommended AWS Knowledge
* A minimum of 2 years' experience using AWS technology
* Familiarity with AWS security best practices
* Ability to independently define AWS architecture and services and understand how they integrate with each other
* Ability to define and architect AWS Big Data services and explain how they fit in the data lifecycle of collection, ingestion, storage, processing, and visualization


Recommended General IT Knowledge
* At least 5 years’ experience in a data analytics field
* Understand how to control access to secure data
* Understand the frameworks that underpin large scale distributed systems like Hadoop/Spark and MPP data warehouses
* Understand the tools and design platforms that allow processing of data from multiple heterogeneous sources with different frequencies (batch/real-time)
* Capable of designing a scalable and cost-effective architecture to process data

Exam Preparation
These training courses and materials may be helpful for examination preparation:

AWS Training (aws.amazon.com/training)
* Big Data Technology Fundamentals
* Big Data on AWS

AWS Whitepapers (aws.amazon.com/whitepapers), available in Kindle and .pdf formats
* AWS Cloud Computing Whitepapers, specifically the Database and Analytics whitepapers
* AWS Documentation (aws.amazon.com/documentation)

Exam Content
Response Types

There are two types of questions on the examination:
* Multiple-choice: Has one correct response and three or four incorrect responses (distractors).
* Multiple-response: Has two or more correct responses out of five or more options.

Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that an examinee with incomplete knowledge or skill would likely choose. However, they are generally plausible responses that fit in the content area defined by the test objective.

Unanswered questions are scored as incorrect; there is no penalty for guessing.

Unscored Content
Your examination may include unscored items that are placed on the test to gather statistical information. These items are not identified on the form and do not affect your score.
Exam Results

The AWS Certified Big Data – Specialty (BDS-C00) examination is a pass or fail exam. The examination is scored against a minimum standard established by AWS professionals who are guided by certification industry best practices and guidelines.

Your score report contains a table of classifications of your performance at each section level. This information is designed to provide general feedback concerning your examination performance. The examination uses a compensatory scoring model, which means that you do not need to “pass” the individual sections, only the overall examination. Each section of the examination has a specific weighting, so some sections have more questions than others. The table contains general information, highlighting your strengths and weaknesses. Exercise caution when interpreting section-level feedback.

Content Outline
This exam guide includes weightings, test domains, and objectives only. It is not a comprehensive listing of the content on this examination. The table below lists the main content domains and their weightings.
Domain 1: Collection 17%
Domain 2: Storage 17%
Domain 3: Processing 17%
Domain 4: Analysis 17%
Domain 5: Visualization 12%
Domain 6: Data Security 20%

Domain 1: Collection
1.1 Determine the operational characteristics of the collection system
1.2 Select a collection system that handles the frequency of data change and type of data being ingested
1.3 Identify the properties that need to be enforced by the collection system: order, data structure, metadata, etc.
1.4 Explain the durability and availability characteristics for the collection approach

Domain 2: Storage

2.1 Determine and optimize the operational characteristics of the storage solution
2.2 Determine data access and retrieval patterns
2.3 Evaluate mechanisms for capture, update, and retrieval of catalog entries
2.4 Determine appropriate data structure and storage format

Domain 3: Processing
3.1 Identify the appropriate data processing technology for a given scenario
3.2 Determine how to design and architect the data processing solution
3.3 Determine the operational characteristics of the solution implemented

Domain 4: Analysis

4.1 Determine the tools and techniques required for analysis
4.2 Determine how to design and architect the analytical solution
4.3 Determine and optimize the operational characteristics of the Analysis

Domain 5: Visualization
5.1 Determine the appropriate techniques for delivering the results/output
5.2 Determine how to design and create the Visualization platform
5.3 Determine and optimize the operational characteristics of the Visualization system

Domain 6: Data Security
6.1 Determine encryption requirements and/or implementation technologies
6.2 Choose the appropriate technology to enforce data governance
6.3 Identify how to ensure data integrity
6.4 Evaluate regulatory requirements

QUESTION 1
A company collects temperature, humidity, and atmospheric pressure data in cities across multiple continents. The average volume of data collected per site each day is 500 GB. Each site has a high-speed internet connection. The company's weather forecasting applications are based in a single Region and analyze the data daily.
What is the FASTEST way to aggregate data from all of these global sites?

A. Enable Amazon S3 Transfer Acceleration on the destination bucket. Use multipart uploads to directly upload site data to the destination bucket.
B. Upload site data to an Amazon S3 bucket in the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket.
C. Schedule AWS Snowball jobs daily to transfer data to the closest AWS Region. Use S3 cross-Region replication to copy objects to the destination bucket.
D. Upload the data to an Amazon EC2 instance in the closest Region. Store the data in an Amazon Elastic Block Store (Amazon EBS) volume. Once a day take an EBS snapshot and copy it to the centralized Region. Restore the EBS volume in the centralized Region and run an analysis on the data daily.

Answer: A

Explanation:
You might want to use Transfer Acceleration on a bucket for various reasons, including the following:
* You have customers that upload to a centralized bucket from all over the world.
* You transfer gigabytes to terabytes of data on a regular basis across continents.
* You are unable to utilize all of your available bandwidth over the internet when uploading to Amazon S3.

"Amazon S3 Transfer Acceleration can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfer of larger objects. Customers who have either web or mobile applications with widespread users or applications hosted far away from their S3 bucket can experience long and variable upload and download speeds over the Internet."

Multipart upload complements this: "Improved throughput – You can upload parts in parallel to improve throughput." (https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html)
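
The part-splitting arithmetic behind multipart upload can be sketched as follows; the 5 MiB minimum part size and 10,000-part cap are S3's documented limits, while the 64 MiB part size and the 500 GB object (treated as 500 × 1024 MiB) are illustrative.

```python
# Rough illustration of how multipart upload splits an object so parts can
# be uploaded in parallel. The 5 MiB minimum part size and 10,000-part cap
# are S3's documented limits; the 64 MiB part size and the 500 GB object
# (treated as 500 * 1024 MiB) are example values.

MIB = 1024 * 1024
MIN_PART = 5 * MIB
MAX_PARTS = 10_000

def part_count(object_size: int, part_size: int = 64 * MIB) -> int:
    if part_size < MIN_PART:
        raise ValueError("part size below the 5 MiB minimum")
    parts = -(-object_size // part_size)  # ceiling division
    if parts > MAX_PARTS:
        raise ValueError("too many parts; choose a larger part size")
    return parts

# One site's 500 GB daily batch, split into 64 MiB parallel parts:
print(part_count(500 * 1024 * MIB))  # 8000
```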

QUESTION 2
A company needs the ability to analyze the log files of its proprietary application. The logs are stored in JSON format in an Amazon S3 bucket. Queries will be simple and will run on demand. A solutions architect needs to perform the analysis with minimal changes to the existing architecture.
What should the solutions architect do to meet these requirements with the LEAST amount of operational overhead?

A. Use Amazon Redshift to load all the content into one place and run the SQL queries as needed.
B. Use Amazon CloudWatch Logs to store the logs. Run SQL queries as needed from the Amazon CloudWatch console.
C. Use Amazon Athena directly with Amazon S3 to run the queries as needed.
D. Use AWS Glue to catalog the logs. Use a transient Apache Spark cluster on Amazon EMR to run the SQL queries as needed.

Answer: C

Explanation:
Amazon Athena is a serverless, interactive query service that can run standard SQL queries directly against JSON data stored in Amazon S3, so it requires no changes to the existing architecture and no infrastructure to manage.

QUESTION 3
A company uses AWS Organizations to manage multiple AWS accounts for different departments.
The management account has an Amazon S3 bucket that contains project reports. The company
wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations.
Which solution meets these requirements with the LEAST amount of operational overhead?

A. Add the aws:PrincipalOrgID global condition key with a reference to the organization ID to the S3 bucket policy.
B. Create an organizational unit (OU) for each department. Add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy.
C. Use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization, LeaveOrganization, and RemoveAccountFromOrganization events. Update the S3 bucket policy accordingly.
D. Tag each user that needs access to the S3 bucket. Add the aws:PrincipalTag global condition key to the S3 bucket policy.

Answer: A
Explanation:
The aws:PrincipalOrgID global condition key provides an alternative to listing the account IDs of every AWS
account in an organization. For example, the following Amazon S3 bucket policy allows members of
any account in the XXX organization to add an object to the examtopics bucket.
{
  "Version": "2012-10-17",
  "Statement": {
    "Sid": "AllowPutObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::examtopics/*",
    "Condition": {"StringEquals": {"aws:PrincipalOrgID": ["XXX"]}}
  }
}
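
As a sketch, a policy of this shape can be generated programmatically. The organization ID o-a1b2c3d4e5 below is a hypothetical placeholder; note that "2012-10-17" is the current valid policy language version:

```python
import json

def build_org_bucket_policy(bucket_name, org_id):
    """Build an S3 bucket policy that allows s3:PutObject only for
    principals belonging to the given AWS Organizations organization."""
    return {
        "Version": "2012-10-17",  # current IAM policy language version
        "Statement": [{
            "Sid": "AllowPutObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
            # Single condition key replaces listing every account ID.
            "Condition": {"StringEquals": {"aws:PrincipalOrgID": [org_id]}},
        }],
    }

# Hypothetical organization ID for illustration.
policy = build_org_bucket_policy("examtopics", "o-a1b2c3d4e5")
print(json.dumps(policy, indent=2))
```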

QUESTION 4
An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are
stored in an Amazon S3 bucket. The EC2 instance needs to access the S3 bucket without connectivity to the internet.
Which solution will provide private network connectivity to Amazon S3?

A. Create a gateway VPC endpoint to the S3 bucket.
B. Stream the logs to Amazon CloudWatch Logs. Export the logs to the S3 bucket.
C. Create an instance profile on Amazon EC2 to allow S3 access.
D. Create an Amazon API Gateway API with a private link to access the S3 endpoint.

Answer: A

Explanation:
A gateway VPC endpoint lets the EC2 instance reach Amazon S3 over the AWS private network instead of the public internet, so no internet gateway or NAT device is required.
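
As a sketch, these are the request parameters for creating such an endpoint, in the shape the EC2 CreateVpcEndpoint API accepts; the VPC and route table IDs below are hypothetical:

```python
def gateway_endpoint_request(vpc_id, route_table_ids, region="us-east-1"):
    """Parameters for an EC2 CreateVpcEndpoint call that creates a gateway
    endpoint for S3. The endpoint adds a prefix-list route to the given
    route tables, keeping S3 traffic on the AWS network."""
    return {
        "VpcId": vpc_id,
        "ServiceName": f"com.amazonaws.{region}.s3",
        "VpcEndpointType": "Gateway",
        "RouteTableIds": route_table_ids,
    }

# Hypothetical IDs for illustration.
req = gateway_endpoint_request("vpc-0abc", ["rtb-0def"])
```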

QUESTION 5
A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded
documents in an Amazon EBS volume. For better scalability and availability, the company
duplicated the architecture and created a second EC2 instance and EBS volume in another Availability
Zone, placing both behind an Application Load Balancer. After completing this change, users reported
that, each time they refreshed the website, they could see one subset of their documents or the
other, but never all of the documents at the same time.
What should a solutions architect propose to ensure users see all of their documents at once?

A. Copy the data so both EBS volumes contain all the documents.
B. Configure the Application Load Balancer to direct a user to the server with the documents.
C. Copy the data from both EBS volumes to Amazon EFS. Modify the application to save new documents to Amazon EFS.
D. Configure the Application Load Balancer to send the request to both servers. Return each document from the correct server.

Answer: C

Explanation:
Amazon EFS provides a shared file system that both EC2 instances can mount across Availability Zones, so every document is visible from either server.

QUESTION 6
A company uses NFS to store large video files in on-premises network attached storage. Each video
file ranges in size from 1 MB to 500 GB. The total storage is 70 TB and is no longer growing. The
company decides to migrate the video files to Amazon S3. The company must migrate the video files
as soon as possible while using the least possible network bandwidth.
Which solution will meet these requirements?

A. Create an S3 bucket. Create an IAM role that has permissions to write to the S3 bucket. Use the AWS CLI to copy all files locally to the S3 bucket.
B. Create an AWS Snowball Edge job. Receive a Snowball Edge device on premises. Use the Snowball Edge client to transfer data to the device. Return the device so that AWS can import the data into Amazon S3.
C. Deploy an S3 File Gateway on premises. Create a public service endpoint to connect to the S3 File Gateway. Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file
share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.

D. Set up an AWS Direct Connect connection between the on-premises network and AWS. Deploy an S3 File Gateway on premises. Create a public virtual interface (VIF) to connect to the S3 File Gateway.
Create an S3 bucket. Create a new NFS file share on the S3 File Gateway. Point the new file share to the S3 bucket. Transfer the data from the existing NFS file share to the S3 File Gateway.

Answer: B

Explanation:
The basic difference between Snowball and Snowball Edge is the capacity they provide. Snowball
provides a total of 50 TB or 80 TB, out of which 42 TB or 72 TB is available, while Amazon Snowball
Edge provides 100 TB, out of which 83 TB is available.
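
A quick back-of-the-envelope calculation shows why shipping the data beats the network here. The 1 Gbps link speed is an assumption for illustration, not a figure from the question:

```python
# Why Snowball Edge: moving 70 TB over the wire takes days even on a fast,
# fully dedicated link, while a single Snowball Edge holds the whole
# dataset and consumes no network bandwidth.
total_bytes = 70 * 10**12      # 70 TB of video files
link_bps = 10**9               # assumed 1 Gbps uplink, fully saturated
seconds = total_bytes * 8 / link_bps
days = seconds / 86400
print(f"Transfer at 1 Gbps would take about {days:.1f} days")
```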

QUESTION 7
A company has an application that ingests incoming messages. These messages are then quickly
consumed by dozens of other applications and microservices.
The number of messages varies drastically and sometimes spikes as high as 100,000 each second.
The company wants to decouple the solution and increase scalability.
Which solution meets these requirements?

A. Persist the messages to Amazon Kinesis Data Analytics. All the applications will read and process the messages.
B. Deploy the application on Amazon EC2 instances in an Auto Scaling group, which scales the number of EC2 instances based on CPU metrics.
C. Write the messages to Amazon Kinesis Data Streams with a single shard. All applications will read from the stream and process the messages.
D. Publish the messages to an Amazon Simple Notification Service (Amazon SNS) topic with one or more Amazon Simple Queue Service (Amazon SQS) subscriptions. All applications then process the messages from the queues.

Answer: D

Explanation:
Publishing the messages to an Amazon SNS topic with one or more Amazon SQS queue subscriptions is the standard fan-out pattern: SNS delivers a copy of every message to each subscribed queue, and each application or microservice consumes from its own queue at its own pace. This decouples producers from consumers, absorbs spikes such as 100,000 messages each second, and lets every consumer scale independently. A Kinesis data stream with a single shard (option C) could not handle that throughput.
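
The fan-out behavior in option D can be sketched with an in-memory stand-in; the queue names are hypothetical, and this mimics only the delivery semantics, not the scale:

```python
from collections import deque

class FanOutTopic:
    """In-memory stand-in for an SNS topic with SQS queue subscriptions:
    every published message is copied into each subscribed queue, so each
    consumer reads independently and at its own pace."""
    def __init__(self):
        self.queues = {}

    def subscribe(self, queue_name):
        self.queues[queue_name] = deque()

    def publish(self, message):
        for queue in self.queues.values():
            queue.append(message)  # SNS delivers a copy to every queue

topic = FanOutTopic()
topic.subscribe("billing-queue")
topic.subscribe("analytics-queue")
topic.publish({"order_id": 42})
```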

DOP-C02 AWS Certified DevOps Engineer – Professional Exam

The AWS Certified DevOps Engineer – Professional showcases individuals’ technical expertise in provisioning, operating, and managing distributed application systems on the AWS platform, giving them increased confidence and credibility with peers, stakeholders, and customers. Organizations with these qualified professionals can ensure speedy delivery of secure, compliant systems that are highly available and scalable.

Did you know? Job listings requiring this certification have increased by 52% between Oct 2021 and Sept 2022 (source: Lightcast™ September 2022).

Exam overview

Level: Professional
Length: 180 minutes to complete the exam
Format: 75 questions, either multiple choice or multiple response.

Recommended AWS knowledge
The target candidate has the following:
• Experience in building highly automated infrastructure
• Experience in administering operating systems
• Experience with modern development and operations processes and methodologies
• Experience in securing AWS infrastructure


Exam Topics
Domain 1: SDLC Automation 22%
Domain 2: Configuration Management and IaC 17%
Domain 3: Resilient Cloud Solutions 15%
Domain 4: Monitoring and Logging 15%
Domain 5: Incident and Event Response 14%
Domain 6: Security and Compliance 17%
TOTAL 100%

Who should take this exam?
AWS Certified DevOps Engineer – Professional is intended for individuals with two or more years of experience provisioning, operating, and managing AWS environments. Before you take this exam, we recommend you have:

Experience developing code in at least one high-level programming language; building highly automated infrastructures; and administering operating systems
Understanding of modern development and operations processes and methodologies
Ability to implement and manage continuous delivery systems and methodologies on AWS
Ability to implement and automate security controls, governance processes, and compliance validation
Ability to define and deploy monitoring, metrics, and logging systems on AWS

What does it take to earn this certification?
To earn this certification, you’ll need to take and pass the AWS Certified DevOps Engineer – Professional exam (DOP-C02). The exam features a combination of two question formats: multiple choice and multiple response. Additional information, such as the exam content outline and passing score, is in the exam guide.

Review sample questions that demonstrate the format of the questions used on this exam and include rationales for the correct answers.

Prepare for your exam
You’ve set your goal. Now it’s time to build knowledge and skills to propel your career. Check out these resources from AWS Training and Certification that are relevant to AWS Certified DevOps Engineer – Professional. We don’t require that you take any specific training before you take an exam. These recommended resources are opportunities to learn from the experts at AWS.

Additional resources – including AWS Builder Labs and game-based learning to complement your prep – are available with a subscription on AWS Skill Builder.

Select a resource type from the section below to learn more.


QUESTION 1
A company has a mobile application that makes HTTP API calls to an Application Load Balancer (ALB).
The ALB routes requests to an AWS Lambda function. Many different versions of the application are
in use at any given time, including versions that are in testing by a subset of users. The version of the
application is defined in the user-agent header that is sent with all requests to the API.
After a series of recent changes to the API, the company has observed issues with the application.
The company needs to gather a metric for each API operation by response code for each version of
the application that is in use. A DevOps engineer has modified the Lambda function to extract the
API operation name, the version information from the user-agent header, and the response code.
Which additional set of actions should the DevOps engineer take to gather the required metrics?

A. Modify the Lambda function to write the API operation name, response code, and version number
as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs metric filter that
increments a metric for each API operation name. Specify response code and application version as
dimensions for the metric.

B. Modify the Lambda function to write the API operation name, response code, and version number
as a log line to an Amazon CloudWatch Logs log group. Configure a CloudWatch Logs Insights query
to populate CloudWatch metrics from the log lines. Specify response code and application version as
dimensions for the metric.

C. Configure the ALB access logs to write to an Amazon CloudWatch Logs log group. Modify the
Lambda function to respond to the ALB with the API operation name, response code, and version
number as response metadata. Configure a CloudWatch Logs metric filter that increments a metric
for each API operation name. Specify response code and application version as dimensions for the metric.

D. Configure AWS X-Ray integration on the Lambda function. Modify the Lambda function to create
an X-Ray subsegment with the API operation name, response code, and version number. Configure
X-Ray Insights to extract an aggregated metric for each API operation name and to publish the metric
to Amazon CloudWatch. Specify response code and application version as dimensions for the metric.

Answer: A

Explanation:
A CloudWatch Logs metric filter can increment a custom metric for matching log lines and publish dimensions extracted from the log event, such as response code and application version. CloudWatch Logs Insights queries cannot populate CloudWatch metrics.
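
The metric-filter approach described in option A relies on the Lambda function emitting a structured log line that the filter can match. A minimal sketch of such a line (the field names are hypothetical):

```python
import json

def metric_log_line(operation, status_code, app_version):
    """Structured JSON log line the Lambda function could write to
    CloudWatch Logs. A metric filter pattern such as
    { $.status = 500 } can then match it, and the 'operation', 'status'
    and 'version' fields can be extracted as metric dimensions."""
    return json.dumps({
        "operation": operation,
        "status": status_code,
        "version": app_version,
    })

# Hypothetical values for illustration.
line = metric_log_line("GetOrder", 500, "2.3.1")
print(line)
```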

QUESTION 2
A company provides an application to customers. The application has an Amazon API Gateway REST
API that invokes an AWS Lambda function. On initialization, the Lambda function loads a large
amount of data from an Amazon DynamoDB table. The data load process results in long cold-start
times of 8-10 seconds. The DynamoDB table has DynamoDB Accelerator (DAX) configured.
Customers report that the application intermittently takes a long time to respond to requests. The
application receives thousands of requests throughout the day. In the middle of the day, the
application experiences 10 times more requests than at any other time of the day. Near the end of
the day, the application’s request volume decreases to 10% of its normal total.
A DevOps engineer needs to reduce the latency of the Lambda function at all times of the day.
Which solution will meet these requirements?

A. Configure provisioned concurrency on the Lambda function with a concurrency value of 1. Delete the DAX cluster for the DynamoDB table.
B. Configure reserved concurrency on the Lambda function with a concurrency value of 0.
C. Configure provisioned concurrency on the Lambda function. Configure AWS Application Auto Scaling on the Lambda function with provisioned concurrency values set to a minimum of 1 and a maximum of 100.
D. Configure reserved concurrency on the Lambda function. Configure AWS Application Auto Scaling on the API Gateway API with a reserved concurrency maximum value of 100.

Answer: C
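
Option C's combination can be sketched as the scalable-target registration that Application Auto Scaling expects for Lambda provisioned concurrency; the function name and alias below are hypothetical:

```python
def provisioned_concurrency_scaling(function_name, alias,
                                    min_cap=1, max_cap=100):
    """Request shape for Application Auto Scaling's RegisterScalableTarget
    call when scaling Lambda provisioned concurrency. Keeping at least one
    pre-initialized execution environment removes cold starts, and auto
    scaling raises capacity for the midday spike."""
    return {
        "ServiceNamespace": "lambda",
        "ResourceId": f"function:{function_name}:{alias}",
        "ScalableDimension": "lambda:function:ProvisionedConcurrency",
        "MinCapacity": min_cap,
        "MaxCapacity": max_cap,
    }

# Hypothetical function name and alias for illustration.
target = provisioned_concurrency_scaling("api-handler", "live")
```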

QUESTION 3
A company is adopting AWS CodeDeploy to automate its application deployments for a Java-Apache
Tomcat application with an Apache Webserver. The development team started with a proof of
concept, created a deployment group for a developer environment, and performed functional tests
within the application. After completion, the team will create additional deployment groups for
staging and production.
The current log level is configured within the Apache settings, but the team wants to change this
configuration dynamically when the deployment occurs, so that they can set different log level
configurations depending on the deployment group without having a different application revision
for each group.
How can these requirements be met with the LEAST management overhead and without requiring
different script versions for each deployment group?

A. Tag the Amazon EC2 instances depending on the deployment group. Then place a script into the
application revision that calls the metadata service and the EC2 API to identify which deployment
group the instance is part of. Use this information to configure the log level settings. Reference the
script as part of the AfterInstall lifecycle hook in the appspec.yml file.

B. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_NAME to
identify which deployment group the instance is part of. Use this information to configure the log
level settings. Reference this script as part of the BeforeInstall lifecycle hook in the appspec.yml file.

C. Create a CodeDeploy custom environment variable for each environment. Then place a script into
the application revision that checks this environment variable to identify which deployment group
the instance is part of. Use this information to configure the log level settings. Reference this script as
part of the ValidateService lifecycle hook in the appspec.yml file.

D. Create a script that uses the CodeDeploy environment variable DEPLOYMENT_GROUP_ID to
identify which deployment group the instance is part of to configure the log level settings. Reference
this script as part of the Install lifecycle hook in the appspec.yml file.

Answer: B
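
CodeDeploy exposes DEPLOYMENT_GROUP_NAME to lifecycle-hook scripts, which is what option B relies on. A minimal sketch of such a hook script, with hypothetical deployment group names mapped to Apache log levels:

```python
import os

# One script ships in every application revision; at deploy time it reads
# the CodeDeploy-provided DEPLOYMENT_GROUP_NAME variable and picks the
# Apache LogLevel for that environment (group names are hypothetical).
LOG_LEVELS = {
    "developer": "debug",
    "staging": "info",
    "production": "warn",
}

def log_level_for_group(default="warn"):
    group = os.environ.get("DEPLOYMENT_GROUP_NAME", "")
    return LOG_LEVELS.get(group, default)
```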

QUESTION 4
A company requires its developers to tag all Amazon Elastic Block Store (Amazon EBS) volumes in an
account to indicate a desired backup frequency. This requirement includes EBS volumes that do not
require backups. The company uses custom tags named Backup_Frequency that have values of none,
daily, or weekly that correspond to the desired backup frequency. An audit finds that developers are
occasionally not tagging the EBS volumes.
A DevOps engineer needs to ensure that all EBS volumes always have the Backup_Frequency tag so
that the company can perform backups at least weekly unless a different value is specified.
Which solution will meet these requirements?

A. Set up AWS Config in the account. Create a custom rule that returns a compliance failure for all
Amazon EC2 resources that do not have a Backup_Frequency tag applied. Configure a remediation
action that uses a custom AWS Systems Manager Automation runbook to apply the
Backup_Frequency tag with a value of weekly.

B. Set up AWS Config in the account. Use a managed rule that returns a compliance failure for
EC2::Volume resources that do not have a Backup_Frequency tag applied. Configure a remediation
action that uses a custom AWS Systems Manager Automation runbook to apply the
Backup_Frequency tag with a value of weekly.

C. Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS
CreateVolume events. Configure a custom AWS Systems Manager Automation runbook to apply the
Backup_Frequency tag with a value of weekly. Specify the runbook as the target of the rule.

D. Turn on AWS CloudTrail in the account. Create an Amazon EventBridge rule that reacts to EBS
CreateVolume events or EBS ModifyVolume events. Configure a custom AWS Systems Manager
Automation runbook to apply the Backup_Frequency tag with a value of weekly. Specify the runbook
as the target of the rule.

Answer: B
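
The managed-rule approach in option B can be sketched as the ConfigRule structure passed to the PutConfigRule API; the rule name is hypothetical, while REQUIRED_TAGS is the managed rule's identifier:

```python
import json

def required_tags_rule(tag_key="Backup_Frequency"):
    """ConfigRule structure for the AWS Config managed rule REQUIRED_TAGS,
    scoped to EBS volumes, so any volume missing the tag is flagged as
    noncompliant and can be auto-remediated."""
    return {
        "ConfigRule": {
            "ConfigRuleName": "ebs-backup-frequency-tag",  # hypothetical
            "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
            "InputParameters": json.dumps({"tag1Key": tag_key}),
            "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]},
        }
    }

rule = required_tags_rule()
```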

QUESTION 5
A company is using an Amazon Aurora cluster as the data store for its application. The Aurora cluster
is configured with a single DB instance. The application performs read and write operations on the
database by using the cluster’s instance endpoint.
The company has scheduled an update to be applied to the cluster during an upcoming maintenance
window. The cluster must remain available with the least possible interruption during the
maintenance window.
What should a DevOps engineer do to meet these requirements?

A. Add a reader instance to the Aurora cluster. Update the application to use the Aurora cluster
endpoint for write operations. Update the Aurora cluster’s reader endpoint for reads.

B. Add a reader instance to the Aurora cluster. Create a custom ANY endpoint for the cluster. Update
the application to use the Aurora cluster’s custom ANY endpoint for read and write operations.

C. Turn on the Multi-AZ option on the Aurora cluster. Update the application to use the Aurora cluster
endpoint for write operations. Update the Aurora cluster’s reader endpoint for reads.

D. Turn on the Multi-AZ option on the Aurora cluster. Create a custom ANY endpoint for the cluster.
Update the application to use the Aurora cluster’s custom ANY endpoint for read and write
operations.

Answer: A

Explanation:
With a reader instance added, the cluster (writer) endpoint always points to the current writer and the reader endpoint serves reads, so the database remains available while the maintenance update fails over between instances. A custom ANY endpoint could route write traffic to the reader instance, causing write failures.
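
Splitting traffic between Aurora's cluster (writer) endpoint and reader endpoint, as option A describes, can be sketched as a trivial router; the endpoint DNS names below are hypothetical:

```python
def pick_endpoint(operation, cluster_endpoint, reader_endpoint):
    """Route writes to the Aurora cluster (writer) endpoint, which always
    follows the current writer across failovers, and reads to the reader
    endpoint, which load-balances across reader instances."""
    return cluster_endpoint if operation == "write" else reader_endpoint

# Hypothetical endpoint names for illustration.
writer = "mydb.cluster-abc.us-east-1.rds.amazonaws.com"
reader = "mydb.cluster-ro-abc.us-east-1.rds.amazonaws.com"
```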

ANS-C01 Amazon AWS Certified Advanced Networking – Specialty Exam

The AWS Certified Advanced Networking – Specialty (ANS-C01) exam is designed for individuals who perform complex networking tasks and have at least five years of hands-on experience in network architecture, design, and implementation. This exam validates advanced technical skills and experience in designing and implementing AWS and hybrid IT network architectures at scale.

The ANS-C01 exam consists of 65 multiple-choice and multiple-answer questions and must be completed within 170 minutes. The exam is available in English, Japanese, Korean, and Simplified Chinese and can be taken at a testing center or via online proctoring. The cost of the exam is $300.

The exam covers a variety of topics, including designing and implementing hybrid IT network architectures, advanced networking concepts and technologies, automation, and security. It also covers AWS-specific topics, such as AWS networking services, AWS Direct Connect, and Amazon VPC.

To prepare for the exam, candidates should have a strong understanding of networking concepts and technologies, as well as experience working with AWS networking services. AWS recommends that candidates have achieved the AWS Certified Solutions Architect – Associate or AWS Certified Developer – Associate certification before taking the ANS-C01 exam.

Candidates can also take advantage of AWS training courses, practice exams, and other resources to prepare for the exam. Upon passing the ANS-C01 exam, candidates will earn the AWS Certified Advanced Networking – Specialty certification.

Introduction
The AWS Certified Advanced Networking – Specialty (ANS-C01) exam is intended for individuals who perform an AWS networking specialist’s role. The exam validates a candidate’s ability to design, implement, manage, and secure AWS and hybrid network architectures at scale.

The exam is for individuals who perform complex networking tasks, and validates an individual’s ability to perform the following tasks:
• Design and develop hybrid and cloud-based networking solutions by using AWS
• Implement core AWS networking services according to AWS best practices
• Operate and maintain hybrid and cloud-based network architecture for all AWS services
• Use tools to deploy and automate hybrid and cloud-based AWS networking tasks
• Implement secure AWS networks using AWS native networking constructs and services

Target candidate description
The target candidate possesses the knowledge, skills, experience, and competence to design, implement, and operate complex AWS and hybrid networking architectures. The target candidate is expected to have 5 or more years of networking experience with 2 or more years of cloud and hybrid networking experience.
Recommended AWS knowledge

The target candidate should have the following knowledge:
• AWS networking nuances and how they relate to the integration of AWS services
• AWS security best practices
• AWS compute and storage options and their underlying consistency models
To view a detailed list of specific tools and technologies that might be covered on the exam, as well as lists of in-scope and out-of-scope AWS services, refer to the Appendix.


Exam content
Response types
There are two types of questions on the exam:
• Multiple choice: Has one correct response and three incorrect responses (distractors)
• Multiple response: Has two or more correct responses out of five or more response options

Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose. Distractors are generally plausible
responses that match the content area.

Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.

Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results

The AWS Certified Advanced Networking – Specialty (ANS-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 750. Your score shows how you performed on the exam as a whole and whether or not you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.

Your score report may contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.

Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, and task statements for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the task statements is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context. The percentage in each domain represents only scored content. Domain % of Exam

Domain 1: Network Design 30%
Domain 2: Network Implementation 26%
Domain 3: Network Management and Operation 20%
Domain 4: Network Security, Compliance, and Governance 24%
TOTAL 100%

 

QUESTION 1
A company is planning to create a service that requires encryption in transit. The traffic must not be
decrypted between the client and the backend of the service. The company will implement the
service by using the gRPC protocol over TCP port 443. The service will scale up to thousands of
simultaneous connections. The backend of the service will be hosted on an Amazon Elastic
Kubernetes Service (Amazon EKS) cluster with the Kubernetes Cluster Autoscaler and the Horizontal
Pod Autoscaler configured. The company needs to use mutual TLS for two-way authentication
between the client and the backend.
Which solution will meet these requirements?

A. Install the AWS Load Balancer Controller for Kubernetes. Using that controller, configure a
Network Load Balancer with a TCP listener on port 443 to forward traffic to the IP addresses of the
backend service Pods.

B. Install the AWS Load Balancer Controller for Kubernetes. Using that controller, configure an
Application Load Balancer with an HTTPS listener on port 443 to forward traffic to the IP addresses of
the backend service Pods.

C. Create a target group. Add the EKS managed node group’s Auto Scaling group as a target Create an
Application Load Balancer with an HTTPS listener on port 443 to forward traffic to the target group.

D. Create a target group. Add the EKS managed node group’s Auto Scaling group as a target. Create a
Network Load Balancer with a TLS listener on port 443 to forward traffic to the target group.

Answer: A

Explanation:
Because the traffic must not be decrypted between the client and the backend, the load balancer cannot terminate TLS. A Network Load Balancer TCP listener passes the encrypted stream through to the backend Pods, where mutual TLS is negotiated end to end. An Application Load Balancer HTTPS listener terminates TLS at the load balancer, which violates the requirement.

QUESTION 2
A company is deploying a new application in the AWS Cloud. The company wants a highly available
web server that will sit behind an Elastic Load Balancer. The load balancer will route requests to
multiple target groups based on the URL in the request. All traffic must use HTTPS. TLS processing
must be offloaded to the load balancer. The web server must know the user’s IP address so that the
company can keep accurate logs for security purposes.
Which solution will meet these requirements?

A. Deploy an Application Load Balancer with an HTTPS listener. Use path-based routing rules to
forward the traffic to the correct target group. Include the X-Forwarded-For request header with
traffic to the targets.

B. Deploy an Application Load Balancer with an HTTPS listener for each domain. Use host-based
routing rules to forward the traffic to the correct target group for each domain. Include the
X-Forwarded-For request header with traffic to the targets.

C. Deploy a Network Load Balancer with a TLS listener. Use path-based routing rules to forward the
traffic to the correct target group. Configure client IP address preservation for traffic to the targets.

D. Deploy a Network Load Balancer with a TLS listener for each domain. Use host-based routing rules
to forward the traffic to the correct target group for each domain. Configure client IP address
preservation for traffic to the targets.

Answer: A

Explanation:
An Application Load Balancer (ALB) can be used to route traffic to multiple target groups based on
the URL in the request. The ALB can be configured with an HTTPS listener to ensure all traffic uses
HTTPS. TLS processing can be offloaded to the ALB, which reduces the load on the web server. Path-based
routing rules can be used to route traffic to the correct target group based on the URL in the
request. The X-Forwarded-For request header can be included with traffic to the targets, which
allows the web server to learn the user’s IP address and keep accurate logs for security purposes.
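
Recovering the original client address from that header can be sketched as follows; the sample addresses are hypothetical documentation IPs:

```python
def client_ip(x_forwarded_for):
    """Extract the original client IP from the X-Forwarded-For header the
    ALB adds: the left-most entry is the client, and later entries are
    intermediate proxies."""
    return x_forwarded_for.split(",")[0].strip()
```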

QUESTION 3
A company has developed an application on AWS that will track inventory levels of vending machines
and initiate the restocking process automatically. The company plans to integrate this application
with vending machines and deploy the vending machines in several markets around the world. The
application resides in a VPC in the us-east-1 Region. The application consists of an Amazon Elastic
Container Service (Amazon ECS) cluster behind an Application Load Balancer (ALB). The
communication from the vending machines to the application happens over HTTPS.
The company is planning to use an AWS Global Accelerator accelerator and configure static IP
addresses of the accelerator in the vending machines for application endpoint access. The
application must be accessible only through the accelerator and not through a direct connection over
the internet to the ALB endpoint.
Which solution will meet these requirements?

A. Configure the ALB in a private subnet of the VPC. Attach an internet gateway without adding
routes in the subnet route tables to point to the internet gateway. Configure the accelerator with
endpoint groups that include the ALB endpoint. Configure the ALB’s security group to only allow
inbound traffic from the internet on the ALB listener port.

B. Configure the ALB in a private subnet of the VPC. Configure the accelerator with endpoint groups
that include the ALB endpoint. Configure the ALB’s security group to only allow inbound traffic from
the internet on the ALB listener port.

C. Configure the ALB in a public subnet of the VPC. Attach an internet gateway. Add routes in the subnet
route tables to point to the internet gateway. Configure the accelerator with endpoint groups that
include the ALB endpoint. Configure the ALB’s security group to only allow inbound traffic from the
accelerator’s IP addresses on the ALB listener port.

D. Configure the ALB in a private subnet of the VPC. Attach an internet gateway. Add routes in the
subnet route tables to point to the internet gateway. Configure the accelerator with endpoint groups
that include the ALB endpoint. Configure the ALB’s security group to only allow inbound traffic from
the accelerator’s IP addresses on the ALB listener port.

Answer: A

Explanation:
AWS Global Accelerator can use an internal ALB as an endpoint, but the VPC must have an internet gateway attached even if no route tables point to it. With the ALB in a private subnet and no routes to the internet gateway, the ALB cannot be reached directly from the internet, so all traffic must arrive through the accelerator.

QUESTION 4
A global delivery company is modernizing its fleet management system. The company has several
business units. Each business unit designs and maintains applications that are hosted in its own AWS
account in separate application VPCs in the same AWS Region. Each business unit’s applications are
designed to get data from a central shared services VPC.
The company wants the network connectivity architecture to provide granular security controls. The
architecture also must be able to scale as more business units consume data from the central shared
services VPC in the future.
Which solution will meet these requirements in the MOST secure manner?

A. Create a central transit gateway. Create a VPC attachment to each application VPC. Provide full
mesh connectivity between all the VPCs by using the transit gateway.

B. Create VPC peering connections between the central shared services VPC and each application
VPC in each business unit’s AWS account.

C. Create VPC endpoint services powered by AWS PrivateLink in the central shared services VPC.
Create VPC endpoints in each application VPC.

D. Create a central transit VPC with a VPN appliance from AWS Marketplace. Create a VPN
attachment from each VPC to the transit VPC. Provide full mesh connectivity among all the VPCs.

Answer: C

QUESTION 5
A company uses a 4 Gbps AWS Direct Connect dedicated connection with a link aggregation group
(LAG) bundle to connect to five VPCs that are deployed in the us-east-1 Region. Each VPC serves a
different business unit and uses its own private VIF for connectivity to the on-premises environment.
Users are reporting slowness when they access resources that are hosted on AWS.
A network engineer finds that there are sudden increases in throughput and that the Direct Connect
connection becomes saturated at the same time for about an hour each business day. The company
wants to know which business unit is causing the sudden increase in throughput. The network
engineer must find out this information and implement a solution to resolve the problem.

Which solution will meet these requirements?

SAP-C02 Amazon AWS Certified Solutions Architect – Professional Exam Dumps

AWS Certified Solutions Architect – Professional
Learn more about this certification and AWS resources that can help you prepare

This credential helps certified individuals showcase advanced knowledge and skills in providing complex solutions to complex problems, optimizing security, cost, and performance, and automating manual processes. This certification is a means for organizations to identify and develop talent with these critical skills for implementing cloud initiatives.

Exam overview
Level: Professional
Length: 180 minutes to complete the exam
Visit Exam pricing for additional cost information.
Format: 75 questions, either multiple choice or multiple response

Who should take this exam?
AWS Certified Solutions Architect – Professional is intended for individuals with two or more years of hands-on experience designing and deploying cloud architecture on AWS. Before you take this exam, we recommend you have:

Familiarity with AWS CLI, AWS APIs, AWS CloudFormation templates, the AWS Billing Console, the AWS Management Console, a scripting language, and Windows and Linux environments
Ability to provide best practice guidance on the architectural design across multiple applications and projects of the enterprise as well as an ability to map business objectives to application/architecture requirements

Ability to evaluate cloud application requirements and make architectural recommendations for implementation, deployment, and provisioning applications on AWS
Ability to design a hybrid architecture using key AWS technologies (e.g., VPN, AWS Direct Connect) as well as a continuous integration and deployment process


What does it take to earn this certification?
To earn this certification, you’ll need to take and pass the AWS Certified Solutions Architect – Professional exam (SAP-C02). The exam features a combination of two question formats: multiple choice and multiple response. Additional information, such as the exam content outline and passing score, is in the exam guide.

Review sample questions that demonstrate the format of the questions used on this exam and include rationales for the correct answers.

Prepare for your exam
You’ve set your goal. Now it’s time to build knowledge and skills to propel your career. Check out these resources from AWS Training and Certification that are relevant to AWS Certified Solutions Architect – Professional. We don’t require that you take any specific training before you take an exam. These recommended resources are opportunities to learn from the experts at AWS.

Additional resources – including AWS Builder Labs and game-based learning to complement your prep – are available with a subscription on AWS Skill Builder.

Introduction
The AWS Certified Solutions Architect – Professional (SAP-C02) exam is intended for individuals who perform a solutions architect role. The exam validates a candidate’s advanced technical skills and experience in designing optimized AWS solutions that are based on the AWS Well-Architected Framework.

The exam also validates a candidate’s ability to complete the following tasks within the scope of the AWS Well-Architected Framework:
• Design for organizational complexity
• Design for new solutions
• Continuously improve existing solutions
• Accelerate workload migration and modernization

Target candidate description
The target candidate has 2 or more years of experience in using AWS services to design and implement cloud solutions. This candidate has the ability to evaluate cloud application requirements and make architectural recommendations for deployment of applications on AWS. The target candidate also can provide expert guidance about architectural design that extends across multiple applications and projects within a complex organization.
Exam content
Response types

There are two types of questions on the exam:

• Multiple choice: Has one correct response and three incorrect responses (distractors)
• Multiple response: Has two or more correct responses out of five or more response options

Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose. Distractors are generally plausible responses that match the content area.
Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 65 questions that will affect your score.

Unscored content
The exam includes 10 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified Solutions Architect – Professional (SAP-C02) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.

Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 750. Your score shows how you performed on the exam as a whole and whether or not you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.
Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback. Candidates who pass the exam will not receive this additional information.
Content outline
This exam guide includes weightings, test domains, and task statements for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the task statements is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context. The percentage in each domain represents only scored content.

Domain % of Exam
Domain 1: Design Solutions for Organizational Complexity 26%
Domain 2: Design for New Solutions 29%
Domain 3: Continuous Improvement for Existing Solutions 25%
Domain 4: Accelerate Workload Migration and Modernization 20%
TOTAL 100%
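Because the percentages represent only scored content, the weightings can be translated into rough question counts. A quick back-of-the-envelope sketch (assuming the 65 scored questions split exactly by the stated weights, which AWS does not guarantee):

```python
# Rough scored-question counts per SAP-C02 domain. This assumes the
# published weightings apply exactly to the 65 scored questions; the
# actual split on a given exam form may vary slightly.
weights = {
    "Design Solutions for Organizational Complexity": 0.26,
    "Design for New Solutions": 0.29,
    "Continuous Improvement for Existing Solutions": 0.25,
    "Accelerate Workload Migration and Modernization": 0.20,
}
scored_questions = 65

counts = {domain: round(w * scored_questions) for domain, w in weights.items()}
for domain, n in counts.items():
    print(f"{domain}: ~{n} questions")

# The weightings sum to 100%, so the rounded estimates add back up to 65.
assert sum(counts.values()) == scored_questions
```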

QUESTION 1
A company is providing weather data over a REST-based API to several customers. The API is hosted by Amazon API Gateway and is integrated with different AWS Lambda functions for each API operation. The company uses Amazon Route 53 for DNS and has created a resource record of weather.example.com. The company stores data for the API in Amazon DynamoDB tables. The company needs a solution that will give the API the ability to fail over to a different AWS Region.
Which solution will meet these requirements?

A. Deploy a new set of Lambda functions in a new Region. Update the API Gateway API to use an edge-optimized API endpoint with Lambda functions from both Regions as targets. Convert the
DynamoDB tables to global tables.

B. Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the answer. Enable target health monitoring. Convert the DynamoDB tables to global tables.

C. Deploy a new API Gateway API and Lambda functions in another Region. Change the Route 53 DNS record to a failover record. Enable target health monitoring. Convert the DynamoDB tables to global tables.

D. Deploy a new API Gateway API in a new Region. Change the Lambda functions to global functions. Change the Route 53 DNS record to a multivalue answer. Add both API Gateway APIs to the answer. Enable target health monitoring. Convert the DynamoDB tables to global tables.

Answer: C

QUESTION 2
A company is running a two-tier web-based application in an on-premises data center. The application layer consists of a single server running a stateful application. The application connects to
a PostgreSQL database running on a separate server. The application’s user base is expected to grow significantly, so the company is migrating the application and database to AWS. The solution will use
Amazon Aurora PostgreSQL, Amazon EC2 Auto Scaling, and Elastic Load Balancing.
Which solution will provide a consistent user experience that will allow the application and database tiers to scale?

A. Enable Aurora Auto Scaling for Aurora Replicas. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.

B. Enable Aurora Auto Scaling for Aurora writers. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled.

C. Enable Aurora Auto Scaling for Aurora Replicas. Use an Application Load Balancer with the round robin routing algorithm and sticky sessions enabled.

D. Enable Aurora Scaling for Aurora writers. Use a Network Load Balancer with the least outstanding requests routing algorithm and sticky sessions enabled.

Answer: C

Aurora Auto Scaling enables your Aurora DB cluster to handle sudden increases in connectivity or workload. When the connectivity or workload decreases, Aurora Auto Scaling removes unnecessary Aurora Replicas so that you don't pay for unused provisioned DB instances.
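To make the scaling behavior concrete, here is a minimal simulation of a target-tracking policy for Aurora Replicas. The thresholds, bounds, and function name are illustrative, not the AWS API; in practice this is configured through Application Auto Scaling against a metric such as average reader CPU utilization:

```python
# Minimal simulation of target-tracking scaling for Aurora Replicas.
# Thresholds and names are illustrative; in AWS this is configured via
# Application Auto Scaling (for example, tracking average reader CPU).

MIN_REPLICAS = 1   # configured minimum capacity
MAX_REPLICAS = 15  # Aurora supports up to 15 replicas per cluster
TARGET_CPU = 60.0  # illustrative target for the tracked metric (percent)

def desired_replicas(current: int, avg_reader_cpu: float) -> int:
    """Scale out above the target; scale in when the metric is well below it."""
    if avg_reader_cpu > TARGET_CPU:
        current += 1          # add a replica to absorb read load
    elif avg_reader_cpu < TARGET_CPU * 0.5:
        current -= 1          # remove an unneeded replica to cut cost
    return max(MIN_REPLICAS, min(MAX_REPLICAS, current))

print(desired_replicas(2, 85.0))  # a read spike scales out -> 3
print(desired_replicas(1, 10.0))  # a quiet period never drops below the minimum -> 1
```

The key property mirrored here is that capacity always stays within the configured minimum and maximum, so scale-in never removes the last replica.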

QUESTION 3
A company uses a service to collect metadata from applications that the company hosts on premises. Consumer devices such as TVs and internet radios access the applications. Many older devices do not support certain HTTP headers and exhibit errors when these headers are present in responses. The company has configured an on-premises load balancer to remove the unsupported headers from responses sent to older devices, which the company identified by the User-Agent headers. The company wants to migrate the service to AWS, adopt serverless technologies, and retain the ability to support the older devices. The company has already migrated the applications into a set of AWS Lambda functions.
Which solution will meet these requirements?

A. Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a CloudFront function to remove the problematic headers based on the value of the User-Agent header.

B. Create an Amazon API Gateway REST API for the metadata service. Configure API Gateway to invoke the correct Lambda function for each type of request. Modify the default gateway responses to remove the problematic headers based on the value of the User-Agent header.

C. Create an Amazon API Gateway HTTP API for the metadata service. Configure API Gateway to invoke the correct Lambda function for each type of request. Create a response mapping template to remove the problematic headers based on the value of the User-Agent. Associate the response data mapping with the HTTP API.

D. Create an Amazon CloudFront distribution for the metadata service. Create an Application Load Balancer (ALB). Configure the CloudFront distribution to forward requests to the ALB. Configure the ALB to invoke the correct Lambda function for each type of request. Create a Lambda@Edge function that will remove the problematic headers in response to viewer requests based on the value of the User-Agent header.
 
Answer: D

QUESTION 4
A company is running a traditional web application on Amazon EC2 instances. The company needs to refactor the application as microservices that run on containers. Separate versions of the application exist in two distinct environments: production and testing. Load for the application is variable, but the minimum load and the maximum load are known. A solutions architect needs to design the updated application with a serverless architecture that minimizes operational complexity.
Which solution will meet these requirements MOST cost-effectively?

A. Upload the container images to AWS Lambda as functions. Configure a concurrency limit for the associated Lambda functions to handle the expected peak load. Configure two separate Lambda integrations within Amazon API Gateway: one for production and one for testing.

B. Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto scaled Amazon Elastic Container Service (Amazon ECS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to direct traffic to the ECS clusters.

C. Upload the container images to Amazon Elastic Container Registry (Amazon ECR). Configure two auto scaled Amazon Elastic Kubernetes Service (Amazon EKS) clusters with the Fargate launch type to handle the expected load. Deploy tasks from the ECR images. Configure two separate Application Load Balancers to direct traffic to the EKS clusters.

D. Upload the container images to AWS Elastic Beanstalk. In Elastic Beanstalk, create separate environments and deployments for production and testing. Configure two separate Application Load Balancers to direct traffic to the Elastic Beanstalk deployments.

Answer: D

Explanation:
The combination of minimal operational complexity and microservices that run on containers points to AWS Elastic Beanstalk.

QUESTION 5
A company has a multi-tier web application that runs on a fleet of Amazon EC2 instances behind an Application Load Balancer (ALB). The instances are in an Auto Scaling group. The ALB and the Auto Scaling group are replicated in a backup AWS Region. The minimum value and the maximum value for the Auto Scaling group are set to zero. An Amazon RDS Multi-AZ DB instance stores the application’s data. The DB instance has a read replica in the backup Region. The application presents an endpoint to end users by using an Amazon Route 53 record. The company needs to reduce its RTO to less than 15 minutes by giving the application the ability to automatically fail over to the backup Region. The company does not have a large enough budget for an active-active strategy. What should a solutions architect recommend to meet these requirements?
 

A. Reconfigure the application’s Route 53 record with a latency-based routing policy that load balances traffic between the two ALBs. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Create an Amazon CloudWatch alarm that is based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region. Configure the CloudWatch alarm to invoke the Lambda function.

B. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Configure Route 53 with a health check that monitors the web application and sends an Amazon Simple Notification Service (Amazon SNS) notification to the Lambda function when the health check status is unhealthy. Update the application’s Route 53 record with a failover policy that routes traffic to the ALB in the backup Region when a health check failure occurs.

C. Configure the Auto Scaling group in the backup Region to have the same values as the Auto Scaling group in the primary Region. Reconfigure the application’s Route 53 record with a latency-based routing policy that load balances traffic between the two ALBs. Remove the read replica. Replace the read replica with a standalone RDS DB instance. Configure Cross-Region Replication between the RDS DB instances by using snapshots and Amazon S3.

D. Configure an endpoint in AWS Global Accelerator with the two ALBs as equal weighted targets. Create an AWS Lambda function in the backup Region to promote the read replica and modify the Auto Scaling group values. Create an Amazon CloudWatch alarm that is based on the HTTPCode_Target_5XX_Count metric for the ALB in the primary Region. Configure the CloudWatch alarm to invoke the Lambda function.

Answer: B
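The moving parts in answer B can be sketched as a small simulation. Every name and value below is hypothetical; in AWS the Route 53 health check, the SNS notification, the Lambda function, the RDS read replica, and the Auto Scaling group are separate managed resources:

```python
# Sketch of the failover flow in answer B. All resource names are
# hypothetical placeholders for the managed AWS resources involved.

def on_health_check_alarm(state: dict) -> dict:
    """Simulates the Lambda invoked via SNS when the primary is unhealthy."""
    if not state["primary_healthy"]:
        state["replica_role"] = "primary"           # promote the read replica
        state["backup_asg"] = {"min": 2, "max": 6}  # raise the 0/0 group values
        state["dns_target"] = "backup-alb"          # failover record redirects
    return state

state = {
    "primary_healthy": False,
    "replica_role": "read-replica",
    "backup_asg": {"min": 0, "max": 0},
    "dns_target": "primary-alb",
}
state = on_health_check_alarm(state)
print(state["dns_target"])  # -> backup-alb
```

Because the Lambda promotion and the Route 53 failover record react automatically to the health check, no manual action is needed to meet the sub-15-minute RTO.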

PAS-C01 Amazon AWS Certified: SAP on AWS – Specialty Exam

Introduction
The AWS Certified: SAP on AWS – Specialty examination validates advanced technical skills and experience to design, implement, migrate, and operate SAP workloads optimally on AWS.

This examination validates an individual’s ability to perform the following tasks:
 Design an SAP solution that runs in the AWS Cloud in adherence with the AWS Well-Architected Framework
 Design an SAP solution that runs in the AWS Cloud in adherence with SAP certification and support requirements
 Implement new SAP workloads on AWS
 Migrate existing SAP workloads to AWS
 Operate SAP workloads on AWS

Target candidate description
The target candidate possesses the knowledge, skills, experience, and competence to design, implement, migrate, and operate SAP workloads on AWS.
This person likely has 5 or more years of SAP experience and 1 or more years of experience in working with SAP on AWS.

Recommended AWS knowledge:
 High availability/disaster recovery
 Core AWS infrastructure services
 AWS migration tools
 AWS global infrastructure
 Security best practices
 Multi-account scenarios and multi-Region scenarios
 Operations and management services and tools
 AWS transfer services

Recommended SAP knowledge and other IT knowledge:
 SAP Basis and SAP NetWeaver administration
 SAP supported databases (including SAP HANA)
 SAP supported operating systems (Linux and Windows)
 SAP migration and installation tools
 Sizing
 Identity management

Exam content
Response types

There are two types of questions on the exam:

• Multiple choice: Has one correct response and three incorrect responses (distractors)
• Multiple response: Has two or more correct responses out of five or more response options

Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose. Distractors are generally plausible responses that match the content area.

Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that affect your score.

Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified: SAP on AWS – Specialty (PAS-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.

Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 750.
Your score shows how you performed on the exam as a whole and whether or not you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.

Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.

Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, and task statements for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the task statements is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context. The percentage in each domain represents only scored content.


Domain 1: Design of SAP workloads on AWS 30%
Domain 2: Implementation of SAP workloads on AWS 24%
Domain 3: Migration of SAP workloads to AWS 26%
Domain 4: Operation and maintenance of SAP workloads on AWS 20%
TOTAL 100%


Domain 1: Design of SAP workloads on AWS
1.1 Design the AWS account structure and connectivity patterns for SAP workloads on AWS.

 Knowledge of:
o AWS global infrastructure
o Account strategy for SAP workloads
o VPC patterns for SAP workloads
o SAP connectivity strategies (for example, AWS Direct Connect, AWS VPN, SAProuter, SAP GUI, Amazon AppStream)

 Skills in:
o Evaluating the use of a single AWS account versus the use of multiple AWS accounts
o Evaluating the use of a single VPC versus the use of multiple VPCs, including user restrictions with VPC sharing
o Evaluating on-premises, co-location, and cloud integration connectivity options
o Defining AWS Regions and Availability Zones based on network and latency requirements
o Designing connectivity options between different AWS accounts and different VPCs

1.2 Design a secure solution for hosting SAP workloads on AWS.
 Knowledge of:
o AWS Identity and Access Management (IAM)
o Route tables, security groups, and network ACLs
o Encryption options for data at rest and data in transit
o AWS service endpoints

 Skills in:
o Defining IAM users and roles for SAP workloads on AWS
o Defining inbound and outbound network flows by using security group rules and network ACL rules
o Troubleshooting traffic flow by using AWS native tools
o Defining the encryption strategy for data at rest and data in transit
o Defining service endpoints for service integrations

1.3 Define optimized and cost-effective infrastructure solutions for SAP workloads on AWS.
 Knowledge of:
o Certified operating system releases for SAP
o Certified database release versions for SAP
o Certified block storage solutions for SAP
o Certified instance types
o Best practices to define shared storage solutions
o AWS pricing models
o SAP transport strategy

 Skills in:
o Defining SAP certified solutions on AWS based on operating system, database, and SAP combinations
o Selecting the optimal instance family for SAP workloads
o Defining instance sizing based on SAP Application Performance Standard (SAPS) performance measurements, database sizing tools, and SAP Early Watch Alert (EWA) reports
o Defining the right storage selection for SAP workloads
o Defining shared storage solutions for SAP file systems
o Evaluating AWS pricing models
o Evaluating a shared solution versus a dedicated solution for database licensing
o Evaluating dedicated hardware solution benefits (for example, SAP certified Nitro based EC2 instance)
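As an illustration of SAPS-based sizing from task 1.3, the sketch below picks the smallest instance type whose rating covers the requirement. The SAPS figures are made-up placeholders; real ratings come from published SAP benchmarks, and real sizing also weighs memory, storage, and certification status:

```python
# Hypothetical SAPS-based instance selection. The ratings below are
# placeholders, not real benchmark values.
required_saps = 40_000            # e.g., from sizing tools or EWA reports
instance_saps = {                 # hypothetical instance-type ratings
    "r5.2xlarge": 12_000,
    "r5.4xlarge": 24_000,
    "r5.8xlarge": 48_000,
}

# Keep only types that cover the requirement, then take the smallest.
candidates = [name for name, saps in instance_saps.items()
              if saps >= required_saps]
fit = min(candidates, key=instance_saps.get)
print(fit)  # -> r5.8xlarge
```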

1.4 Design highly resilient solutions for SAP workloads on AWS.
 Knowledge of:
o High-availability solution options for SAP workloads on AWS
o Disaster recovery solution options for SAP workloads on AWS

 Skills in:
o Defining the optimal architecture by considering operating system, database, and application cluster requirements
o Designing single and distributed SAP solutions on AWS infrastructure
o Designing highly available solutions for SAP systems based on availability requirements
o Designing disaster recovery solutions for SAP systems based on recovery time objective (RTO) and recovery point objective (RPO) requirements
o Defining the optimal architecture by considering scale-up and scale-out options
o Evaluating instance placement options for availability improvement

Domain 2: Implementation of SAP workloads on AWS

2.1 Deploy databases for SAP workloads on AWS.
 Knowledge of:

o Administration of operating systems (for example, Linux, Windows)
o File system layout of databases
o AWS network concepts
o Database administration and security

 Skills in:
o Installing database systems
o Installing database clients

2.2 Deploy SAP applications on AWS.
 Knowledge of:

o Administration of operating systems (for example, Linux, Windows)
o File system layout of SAP applications
o AWS network concepts
o SAP Basis and SAP security

 Skills in:
o Installing SAP applications
o Configuring SAP applications

2.3 Configure high availability for SAP workloads.
 Knowledge of:

o AWS global infrastructure
o Administration of operating systems and databases
o SAP certified high-availability solutions on AWS
o AWS networking concepts (for example, Amazon Route 53, overlay IP addresses, routing methods)
o High-availability cluster concepts

 Skills in:

o Evaluating SAP certified high-availability solutions
o Configuring a highly available cluster between ABAP SAP Central Services (ASCS) and Enqueue Replication Server (ERS) nodes
o Configuring a highly available cluster between database nodes
o Performing cluster failover tests

2.4 Configure the disaster recovery setup for SAP workloads.
 Knowledge of:

o AWS global infrastructure
o Administration of operating systems and databases
o AWS networking concepts (for example, Route 53, routing methods)
o RTO and RPO
o Disaster recovery scenarios (for example, backup and restore, pilot light, warm standby, multi-site)
o Disaster recovery solutions on AWS

 Skills in:
o Configuring disaster recovery solutions
o Configuring database replication
o Performing disaster recovery testing

2.5 Automate deployments of SAP workloads.
 Knowledge of:

o Infrastructure as code (IaC) (for example, AWS CloudFormation)
o Configuration management tools (for example, AWS Systems Manager)
o AWS Launch Wizard for SAP
o DevOps tools

 Skills in:
o Automating infrastructure deployments by using IaC
o Automating SAP application installations
o Automating SAP deployments by using Launch Wizard
o Using configuration management tools

2.6 Validate AWS infrastructure for hosting SAP workloads.
 Knowledge of:

o Administration of operating systems (for example, Linux, Windows)
o Database file system layout
o AWS network concepts
o Database administration and security
o Performance baseline for SAP

 Skills in:
o Performing tests with the HANA Hardware and Cloud Measurement Tools (HCMT)
o Reviewing instance families and sizes
o Validating dedicated hardware solution benefits (for example, SAP certified Nitro based EC2 instance)
o Performing license checks for infrastructure for SAP workloads on AWS
o Performing storage checks by using the flexible I/O tester (FIO) and the dd command
o Performing network latency tests
o Validating infrastructure (for example, Well-Architected Review, SAP OSS Notes, certified operating systems, relational database management system combinations)

Domain 3: Migration of SAP workloads to AWS
3.1 Determine the optimal migration approach for SAP workloads to AWS.
 Knowledge of:

o Homogeneous migration process and heterogeneous migration process
o Target SAP environment architecture (operating system, database, and application)
o SAP interfaces and integration
o Data migration tools
o Data transfer options
o DNS and AWS networking services (for example, Direct Connect, Site-to-Site VPN, Route 53)
o AWS storage services (for example, Amazon Elastic File System [Amazon EFS], Amazon FSx, Amazon S3)
o AWS compute services (for example, Amazon EC2)
o AWS directory services specific to SAP workloads on Windows

 Skills in:
o Creating a technical migration and cutover plan
o Determining the suitable tools and methodologies for cloud migration
o Evaluating the compatibility for target SAP environments on AWS

3.2 Perform a homogeneous migration of SAP workloads to AWS.
 Knowledge of:

o Migration process methodologies for homogeneous migration (for example, backup and restore, database replication, block level replication)
o AWS data transfer services (for example, AWS Snowball, AWS DataSync, S3 Transfer Acceleration)
o DNS and AWS networking services (for example, Direct Connect, Site-to-Site VPN, Route 53)
o AWS storage services (for example, Amazon EFS, Amazon FSx, Amazon S3)
o AWS compute services (for example, Amazon EC2)

 Skills in:
o Using SAP and database-specific migration tools (for example, backup and restore, database replication)
o Using AWS native tools for migrations (for example, AWS Server Migration Service [AWS SMS], AWS Application Migration Service [CloudEndure Migration])

3.3 Perform a heterogeneous migration of SAP workloads to AWS.
 Knowledge of:

o Migration process methodologies for heterogeneous migration (for example, SAP export/import with Software Provisioning Manager (SWPM), Software Update Manager –
Database Migration Option (SUM-DMO) with System Move, third-party vendor tools)
o AWS data transfer services (for example, Snowball, DataSync, S3 Transfer Acceleration)
o Migration from anyDB to SAP HANA
o DNS and AWS networking services (for example, Direct Connect, Site-to-Site VPN, Route 53, DNS)
o AWS storage services (for example, Amazon EFS, Amazon FSx, Amazon S3)
o AWS compute services (for example, Amazon EC2)

 Skills in:
o Using SAP and database-specific migration tools (for example, export/import, SWPM, SUM-DMO)
o Using AWS native tools for migrations (for example, AWS SMS, AWS Application Migration Service [CloudEndure Migration])

3.4 Optimize the migration of SAP workloads.
 Knowledge of:

o Migration process methodologies
o Data transfer optimization
o Network optimization
o AWS networking services (for example, Direct Connect, Site-to-Site VPN, Route 53)
o AWS storage services (for example, Amazon EFS, Amazon FSx, Amazon S3)
o AWS compute services (for example, Amazon EC2)
o AWS automation tools (for example, Launch Wizard, CloudFormation, Systems Manager)

 Skills in:
o Deploying target SAP environments in an automated way
o Fine-tuning data transfer
o Architecting for migration acceleration

Domain 4: Operation and maintenance of SAP workloads on AWS

4.1 Monitor the underlying infrastructure of SAP environments on AWS for performance, availability, and security.
 Knowledge of:

o Performance monitoring of AWS services for SAP
o Availability monitoring of AWS services for SAP

 Skills in:
o Configuring custom metrics and alarms for SAP (for example, Amazon CloudWatch)
o Configuring alarm notifications and invoking actions (for example, Amazon Simple Notification Service [Amazon SNS])
o Installing and updating AWS Data Provider for SAP
o Monitoring API calls for accounts (for example, AWS CloudTrail)
o Monitoring and invoking responses for alerts (for example, Amazon GuardDuty)

4.2 Manage the data protection of SAP applications by using AWS native services.
 Knowledge of:

o RTO and RPO
o Backup and recovery strategies for SAP databases and applications
o Protection of data at rest and data in transit (for example, data encryption)
o Network traffic logging, monitoring, threat detection, and analytics
o IAM

 Skills in:
o Configuring and managing backup and restore of SAP databases by using database native tools and AWS Backint Agent for databases
o Managing S3 Lifecycle policies
o Configuring and managing Amazon Elastic Block Store (Amazon EBS) snapshots and Amazon Machine Images (AMIs)
o Automating backup of SAP components on AWS (for example, AWS Backup, AWS Storage Gateway)
o Configuring encryption for AWS storage and backup services and tools
o Creating and managing accounts, users, groups, access policies, and roles in IAM
o Implementing detective controls (for example, CloudTrail, CloudWatch, GuardDuty)
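
The S3 Lifecycle skill above can be sketched as the lifecycle configuration document itself. The prefix and retention periods are hypothetical examples; the dict maps onto the `LifecycleConfiguration` argument of boto3's `s3.put_bucket_lifecycle_configuration`.

```python
# Sketch: S3 Lifecycle configuration for SAP backup objects -- tier to
# Glacier, then expire. Prefix and day counts are hypothetical.

def backup_lifecycle(prefix: str, glacier_after_days: int,
                     expire_after_days: int) -> dict:
    return {
        "Rules": [
            {
                "ID": "sap-backup-tiering",
                "Status": "Enabled",
                "Filter": {"Prefix": prefix},
                "Transitions": [
                    # move backups to the GLACIER storage class
                    {"Days": glacier_after_days, "StorageClass": "GLACIER"}
                ],
                # delete backups past the retention period
                "Expiration": {"Days": expire_after_days},
            }
        ]
    }

config = backup_lifecycle("hana/backups/", 30, 365)
```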

4.3 Perform routine and proactive maintenance activities for SAP applications on AWS.
 Knowledge of:

o Patch management of different operating systems, SAP applications, and databases
o Downtime management of SAP systems on AWS
o Basics of Linux and Windows clustering
o Architectures and administration for high availability and disaster recovery

 Skills in:
o Configuring and automating patching by using Systems Manager Patch Manager
o Managing downtime schedules by using Systems Manager maintenance windows
o Defining maintenance actions by using Systems Manager documents
o Restoring data from AWS sources (for example, EBS snapshots, AMIs)
o Operating and maintaining high-availability architectures (for example, application failovers, database failovers)
o Maintaining a disaster recovery landscape and performing disaster recovery drills
o Performing operations as code (for example, CloudFormation, Systems Manager, AWS Lambda)
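
The patching and maintenance-window skills above can be sketched as the parameter shapes Systems Manager expects. All names, timings, and filter values below are hypothetical; the dicts correspond to boto3's `ssm.create_maintenance_window` and the approval rules of `ssm.create_patch_baseline`.

```python
# Sketch: a Systems Manager maintenance window and a patch baseline
# approval rule. All names and timings are hypothetical examples.

def maintenance_window(name: str) -> dict:
    """Weekly 4-hour window, Saturday 22:00 UTC; stop scheduling new
    tasks 1 hour before the window closes."""
    return {
        "Name": name,
        "Schedule": "cron(0 22 ? * SAT *)",  # SSM cron expression
        "Duration": 4,                       # window length, hours
        "Cutoff": 1,                         # hours before end
        "AllowUnassociatedTargets": False,
    }

def approval_rule(days_after_release: int) -> dict:
    """Auto-approve critical security patches N days after release."""
    return {
        "PatchRules": [
            {
                "PatchFilterGroup": {
                    "PatchFilters": [
                        {"Key": "CLASSIFICATION", "Values": ["Security"]},
                        {"Key": "SEVERITY", "Values": ["Critical"]},
                    ]
                },
                "ApproveAfterDays": days_after_release,
            }
        ]
    }

window = maintenance_window("sap-prod-patching")
rule = approval_rule(7)
```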

4.4 Review and optimize the architecture of SAP environments on AWS on a regular basis.
 Knowledge of:

o AWS cost and usage monitoring tools (for example, AWS Trusted Advisor, AWS Pricing Calculator)
o Storage configurations and utilization monitoring for SAP workloads on AWS
o EC2 instance sizing for SAP applications and databases

 Skills in:
o Performing SAP capacity planning and reconfiguring AWS services
o Monitoring cost and usage with AWS tools (for example, Cost Explorer, AWS Budgets)
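
The cost-monitoring skill above can be sketched as a Cost Explorer request. The tag key and value are hypothetical cost-allocation tags; the dict would be passed to boto3's `ce.get_cost_and_usage`.

```python
# Sketch: GetCostAndUsage request for monthly cost grouped by service,
# filtered to a hypothetical cost-allocation tag on the SAP landscape.

def monthly_cost_query(start: str, end: str,
                       tag_key: str, tag_value: str) -> dict:
    return {
        "TimePeriod": {"Start": start, "End": end},  # ISO dates, End exclusive
        "Granularity": "MONTHLY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "DIMENSION", "Key": "SERVICE"}],
        "Filter": {"Tags": {"Key": tag_key, "Values": [tag_value]}},
    }

query = monthly_cost_query("2023-01-01", "2023-04-01",
                           "sap-landscape", "production")
```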

Which key tools, technologies, and concepts might be covered on the exam?
The following is a non-exhaustive list of the tools and technologies that could appear on the exam. This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam. The general tools and technologies in this list appear in no particular order.

AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:

AWS services and features

Application Integration:
 Amazon Simple Notification Service (Amazon SNS)
 AWS Step Functions

Compute:
 Amazon EC2
 Amazon EC2 Auto Scaling
 AWS Lambda

Containers:
 Amazon Elastic Container Service (Amazon ECS)
 Amazon Elastic Kubernetes Service (Amazon EKS)

Cost Management:
 AWS Budgets
 AWS Cost and Usage Report
 Cost Explorer
 Reserved Instance reporting
 Savings Plans

Database:
 Amazon RDS (solution-specific and version-specific)

Developer Tools:
 AWS Cloud Development Kit (AWS CDK)
 AWS tools and SDKs

End User Computing:
 Amazon AppStream 2.0
 Amazon WorkSpaces

Management and Governance:
 AWS CLI
 AWS CloudFormation
 AWS CloudTrail
 Amazon CloudWatch
 AWS Compute Optimizer
 AWS Config
 AWS License Manager
 AWS Management Console
 AWS Personal Health Dashboard
 AWS Systems Manager
 AWS Trusted Advisor

Migration and Transfer:
 AWS Application Migration Service (CloudEndure Migration)
 AWS DataSync
 Migration Evaluator (formerly TSO Logic)
 AWS Server Migration Service (AWS SMS)
 AWS Snow Family
 AWS Transfer Family

Networking and Content Delivery:
 Amazon API Gateway
 Amazon CloudFront
 AWS Direct Connect
 AWS Global Accelerator
 Elastic Load Balancing
 AWS PrivateLink
 Amazon Route 53
 AWS Transit Gateway
 Amazon VPC

SAP on AWS Specific:
 AWS Backint Agent
 AWS Launch Wizard

Security, Identity, and Compliance:

 Amazon GuardDuty
 AWS Identity and Access Management (IAM)
 AWS Key Management Service (AWS KMS)
 AWS Security Hub
 AWS Shield
 AWS Single Sign-On
 AWS WAF

Storage:
 AWS Backup
 CloudEndure Disaster Recovery
 Amazon Elastic Block Store (Amazon EBS)
 Amazon Elastic File System (Amazon EFS)
 Amazon FSx for Windows File Server
 Amazon S3
 Amazon S3 Glacier
 AWS Snow Family
 AWS Storage Gateway

Out-of-scope AWS services and features

The following is a non-exhaustive list of AWS services and features that are not covered on the exam. These services and features do not represent every AWS offering that is excluded from the exam content.
Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant.
Out-of-scope AWS services and features include the following:

Analytics:

 All services

Augmented Reality (AR) and Virtual Reality (VR):
 Amazon Sumerian

Blockchain:
 Amazon Managed Blockchain
 Amazon Quantum Ledger Database (Amazon QLDB)

Game Tech:
 All services

Internet of Things:
 All services

Machine Learning:
 All services

Management and Governance:
 AWS Console Mobile Application
 AWS OpsWorks

Media Services:
 All services

Quantum Technologies:
 All services

Robotics:
 AWS RoboMaker

Satellite:
 AWS Ground Station

Security, Identity, and Compliance:
 AWS CloudHSM
 Amazon Macie

Serverless:
 AWS AppSync
 Amazon DynamoDB
 AWS Fargate
 Amazon Simple Queue Service (Amazon SQS)
 


QUESTION 1
A global enterprise is running SAP ERP Central Component (SAP ECC) workloads on Oracle in an on-premises environment. The enterprise plans to migrate to SAP S/4HANA on AWS.
The enterprise recently acquired two other companies. One of the acquired companies is running SAP ECC on Oracle as its ERP system. The other acquired company is running an ERP system that is not from SAP. The enterprise wants to consolidate the three ERP systems into one ERP system on SAP S/4HANA on AWS. Not all the data from the acquired companies needs to be migrated to the final ERP
system. The enterprise needs to complete this migration with a solution that minimizes cost and maximizes operational efficiency.
Which solution will meet these requirements?

A. Perform a lift-and-shift migration of all the systems to AWS. Migrate the ERP system that is not from SAP to SAP ECC. Convert all three systems to SAP S/4HANA by using SAP Software Update Manager (SUM) Database Migration Option (DMO). Consolidate all three SAP S/4HANA systems into a final SAP S/4HANA system. Decommission the other systems.
B. Perform a lift-and-shift migration of all the systems to AWS. Migrate the enterprise’s initial system to SAP HANA, and then perform a conversion to SAP S/4HANA. Consolidate the two systems from the acquired companies with this SAP S/4HANA system by using the Selective Data Transition approach with SAP Data Management and Landscape Transformation (DMLT).
C. Use SAP Software Update Manager (SUM) Database Migration Option (DMO) with System Move to re-architect the enterprise’s initial system to SAP S/4HANA and to change the platform to AWS. Consolidate the two systems from the acquired companies with this SAP S/4HANA system by using the Selective Data Transition approach with SAP Data Management and Landscape Transformation (DMLT).
D. Use SAP Software Update Manager (SUM) Database Migration Option (DMO) with System Move to re-architect all the systems to SAP S/4HANA and to change the platform to AWS. Consolidate all three SAP S/4HANA systems into a final SAP S/4HANA system. Decommission the other systems.

Answer: A


QUESTION 2
A global retail company is running its SAP landscape on AWS. Recently, the company made changes to its SAP Web Dispatcher architecture. The company added an additional SAP Web Dispatcher for high availability, with an Application Load Balancer (ALB) to balance the load between the two SAP Web Dispatchers. When users try to access SAP through the ALB, the system is reachable. However, the SAP backend system is showing an error message. An investigation reveals that the issue is related to SAP session handling and distribution of requests. The company confirmed that the system was working as
expected with one SAP Web Dispatcher. The company replicated the configuration of that SAP Web Dispatcher to the new SAP Web Dispatcher.
How can the company resolve the error?

A. Maintain persistence by using session cookies. Enable session stickiness (session affinity) on the SAP Web Dispatchers by setting the wdisp/https:/esid_support parameter to True.
B. Maintain persistence by using session cookies. Enable session stickiness (session affinity) on the ALB.
C. Turn on host-based routing on the ALB to route traffic between the SAP Web Dispatchers.
D. Turn on URL-based routing on the ALB to route traffic to the application based on URL.

Answer: C


QUESTION 3
A company hosts its SAP NetWeaver workload on SAP HANA in the AWS Cloud. The SAP NetWeaver application is protected by a cluster solution that uses Red Hat Enterprise Linux High Availability Add-On. The cluster solution uses an overlay IP address to ensure that the high availability cluster is still accessible during failover scenarios.
An SAP solutions architect needs to facilitate the network connection to this overlay IP address from multiple locations. These locations include more than 25 VPCs, other AWS Regions, and the on-premises environment. The company already has set up an AWS Direct Connect connection between the on-premises environment and AWS.
What should the SAP solutions architect do to meet these requirements in the MOST scalable manner?

A. Use VPC peering between the VPCs to route traffic between them.
B. Use AWS Transit Gateway to connect the VPCs and on-premises networks together.
C. Use a Network Load Balancer to route connections to various targets within VPCs.
D. Deploy a Direct Connect gateway to connect the Direct Connect connection over a private VIF to one or more VPCs in any accounts.

Answer: D


QUESTION 4
A company is implementing SAP HANA on AWS. According to the company’s security policy, SAP backups must be encrypted. Only authorized team members can have the ability to decrypt the SAP backups.
What is the MOST operationally efficient solution that meets these requirements?

A. Configure AWS Backint Agent for SAP HANA to create SAP backups in an Amazon S3 bucket. After a backup is created, encrypt the backup by using client-side encryption. Share the encryption key with authorized team members only.
B. Configure AWS Backint Agent for SAP HANA to use AWS Key Management Service (AWS KMS) for SAP backups. Create a key policy to grant decryption permission to authorized team members only.
C. Configure AWS Storage Gateway to transfer SAP backups from a file system to an Amazon S3 bucket. Use an S3 bucket policy to grant decryption permission to authorized team members only.
D. Configure AWS Backint Agent for SAP HANA to use AWS Key Management Service (AWS KMS) for SAP backups. Grant object ACL decryption permission to authorized team members only.

Answer: C


QUESTION 5
A data analysis company has two SAP landscapes that consist of sandbox, development, QA, preproduction, and production servers. One landscape is on Windows, and the other landscape is on Red Hat Enterprise Linux. The servers reside in a room in a building that other tenants share.
An SAP solutions architect proposes to migrate the SAP applications to AWS. The SAP solutions architect wants to move the production backups to AWS and wants to make the backups highly available to restore in case of unavailability of an on-premises server.
Which solution will meet these requirements MOST cost-effectively?

A. Take a backup of the production servers. Implement an AWS Storage Gateway Volume Gateway. Create file shares by using the Storage Gateway Volume Gateway. Copy the backup files to the file shares through NFS and SMB.
B. Take a backup of the production servers. Send those backups to tape drives. Implement an AWS Storage Gateway Tape Gateway. Send the backups to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) through the S3 console. Move the backups immediately to S3 Glacier Deep Archive.
C. Implement a third-party tool to take images of the SAP application servers and database server. Take regular snapshots at 1-hour intervals. Send the snapshots to Amazon S3 Glacier directly through the S3 Glacier console. Store the same images in different S3 buckets in different AWS Regions.
D. Take a backup of the production servers. Implement an Amazon S3 File Gateway. Create file shares by using the S3 File Gateway. Copy the backup files to the file shares through NFS and SMB. Map backup files directly to Amazon S3. Configure an S3 Lifecycle policy to send the backup files to S3 Glacier based on the company’s data retention policy.

Answer: C

AWS Certified Security Specialty (SCS-C01) Exam

This credential helps organizations identify and develop talent with critical skills for implementing cloud initiatives. Earning AWS Certified Security – Specialty validates expertise in securing data and workloads in the AWS Cloud.

Exam overview
Level: Specialty
Length: 170 minutes to complete the exam
Cost: 300 USD
Visit Exam pricing for additional cost information.

Format: 65 questions; either multiple choice or multiple response
Delivery method: Pearson VUE and PSI; testing center or online proctored exam

Who should take this exam?

AWS Certified Security – Specialty is intended for individuals who perform a security role and have at least two years of hands-on experience securing AWS workloads. Before you take this exam, we recommend you have:

Five years of IT security experience in designing and implementing security solutions and at least two years of hands-on experience in securing AWS workloads
Working knowledge of AWS security services and features of services to provide a secure production environment and an understanding of security operations and risks
Knowledge of the AWS shared responsibility model and its application; security controls for workloads on AWS; logging and monitoring strategies; cloud security threat models; patch management and security automation; ways to enhance AWS security services with third-party tools and services; and disaster recovery controls, including BCP and backups, encryption, access control, and data retention
Understanding of specialized data classifications and AWS data protection mechanisms, data-encryption methods and AWS mechanisms to implement them, and secure internet protocols and AWS mechanisms to implement them
Ability to make tradeoff decisions with regard to cost, security, and deployment complexity to meet a set of application requirements

What does it take to earn this certification?
To earn this certification, you’ll need to take and pass the AWS Certified Security – Specialty exam (SCS-C01). The exam features a combination of two question formats: multiple choice and multiple response. Additional information, such as the exam content outline and passing score, is in the exam guide.

Introduction
The AWS Certified Security – Specialty (SCS-C01) exam is intended for individuals who perform a security role. The exam validates a candidate’s ability to effectively demonstrate knowledge about securing the AWS platform.

The exam also validates whether a candidate has the following:
 An understanding of specialized data classifications and AWS data protection mechanisms
 An understanding of data-encryption methods and AWS mechanisms to implement them
 An understanding of secure internet protocols and AWS mechanisms to implement them
 A working knowledge of AWS security services and features of services to provide a secure production environment
 Competency from 2 or more years of production deployment experience in using AWS security services and features
 The ability to make tradeoff decisions with regard to cost, security, and deployment complexity to meet a set of application requirements
 An understanding of security operations and risks

Target candidate description
The target candidate should have 5 years of IT security experience in designing and implementing security solutions. Additionally, the target candidate should have 2 or more years of hands-on experience in securing AWS workloads.

Recommended AWS knowledge

The target candidate should have the following knowledge:
 The AWS shared responsibility model and its application
 Security controls for workloads on AWS
 Logging and monitoring strategies
 Cloud security threat models
 Patch management and security automation
 Ways to enhance AWS security services with third-party tools and services
 Disaster recovery controls, including BCP and backups
 Encryption
 Access control
 Data retention

What is considered out of scope for the target candidate?
The following is a non-exhaustive list of related job tasks that the target candidate is not expected to be able to perform. These items are considered out of scope for the exam:

 Create or write configurations
 Implement (SysOps)
 Demonstrate scripting in a specific language (for example, Perl or Java)

For a detailed list of specific tools and technologies that might be covered on the exam, as well as lists of in-scope and out-of-scope AWS services, refer to the Appendix.

Exam content
Response types

There are two types of questions on the exam:
 Multiple choice: Has one correct response and three incorrect responses (distractors)
 Multiple response: Has two or more correct responses out of five or more response options

Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose. Distractors are generally plausible responses that match the content area.

Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.

Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.
Exam results

The AWS Certified Security – Specialty (SCS-C01) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.

Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 750. Your score shows how you performed on the exam as a whole and whether or not you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.

Your score report could contain a table of classifications of your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, and objectives for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the objectives is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context. The percentage in each domain represents only scored content.

Domain % of Exam

Domain 1: Incident Response 12%
Domain 2: Logging and Monitoring 20%
Domain 3: Infrastructure Security 26%
Domain 4: Identity and Access Management 20%
Domain 5: Data Protection 22%
TOTAL 100%

Domain 1: Incident Response
1.1 Given an AWS abuse notice, evaluate the suspected compromised instance or exposed access keys.
 Given an AWS Abuse report about an EC2 instance, securely isolate the instance as part of a forensic investigation.
 Analyze logs relevant to a reported instance to verify a breach, and collect relevant data.
 Capture a memory dump from a suspected instance for later deep analysis or for legal compliance reasons.
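
The isolation task above can be sketched as an ordered plan of EC2 API calls: swap the instance's security groups for an empty quarantine group, enable termination protection, and tag it for the investigation. The instance and security-group IDs are hypothetical, and each pair below maps onto a boto3 EC2 client method.

```python
# Sketch: forensic isolation of a suspected EC2 instance. IDs are
# hypothetical; each step is a (boto3 method name, kwargs) pair.

def quarantine_plan(instance_id: str, quarantine_sg: str) -> list:
    """Ordered calls to isolate an instance while preserving it
    (and its volatile memory) for forensic analysis."""
    return [
        # 1. replace all attached security groups with the quarantine group
        ("modify_instance_attribute",
         {"InstanceId": instance_id, "Groups": [quarantine_sg]}),
        # 2. protect the instance from accidental termination
        ("modify_instance_attribute",
         {"InstanceId": instance_id,
          "DisableApiTermination": {"Value": True}}),
        # 3. tag it so responders can find and track the evidence;
        #    do NOT stop the instance before capturing a memory dump
        ("create_tags",
         {"Resources": [instance_id],
          "Tags": [{"Key": "status", "Value": "quarantine"}]}),
    ]

steps = quarantine_plan("i-0123456789abcdef0", "sg-0quarantine0000001")
```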

1.2 Verify that the Incident Response plan includes relevant AWS services.
 Determine if changes to baseline security configuration have been made.
 Determine if list omits services, processes, or procedures which facilitate Incident Response.
 Recommend services, processes, procedures to remediate gaps.

1.3 Evaluate the configuration of automated alerting, and execute possible remediation of security-related incidents and emerging issues.
 Automate evaluation of conformance with rules for new/changed/removed resources.
 Apply rule-based alerts for common infrastructure misconfigurations.
 Review previous security incidents and recommend improvements to existing systems.

Domain 2: Logging and Monitoring

2.1 Design and implement security monitoring and alerting.
 Analyze architecture and identify monitoring requirements and sources for monitoring statistics.
 Analyze architecture to determine which AWS services can be used to automate monitoring and alerting.
 Analyze the requirements for custom application monitoring, and determine how this could be achieved.
 Set up automated tools/scripts to perform regular audits.

2.2 Troubleshoot security monitoring and alerting.
 Given a custom application which is not reporting its statistics, analyze the configuration and remediate.
 Review audit trails of system and user activity.

2.3 Design and implement a logging solution.
 Analyze architecture and identify logging requirements and sources for log ingestion.
 Analyze requirements and implement durable and secure log storage according to AWS best practices.
 Analyze architecture to determine which AWS services can be used to automate log ingestion and analysis.

2.4 Troubleshoot logging solutions.
 Given the absence of logs, determine the incorrect configuration and define remediation steps.
 Analyze logging access permissions to determine incorrect configuration and define remediation steps.
 Based on the security policy requirements, determine the correct log level, type, and sources.

Domain 3: Infrastructure Security

3.1 Design edge security on AWS.
 For a given workload, assess and limit the attack surface.
 Reduce blast radius (e.g. by distributing applications across accounts and regions).
 Choose appropriate AWS and/or third-party edge services such as WAF, CloudFront and Route 53 to protect against DDoS or filter application-level attacks.
 Given a set of edge protection requirements for an application, evaluate the mechanisms to prevent and detect intrusions for compliance and recommend required changes.
 Test WAF rules to ensure they block malicious traffic.

3.2 Design and implement a secure network infrastructure.
 Disable any unnecessary network ports and protocols.
 Given a set of edge protection requirements, evaluate the security groups and NACLs of an application for compliance and recommend required changes.
 Given security requirements, decide on network segmentation (e.g. security groups and NACLs) that allow the minimum ingress/egress access required.
 Determine the use case for VPN or Direct Connect.
 Determine the use case for enabling VPC Flow Logs.
 Given a description of the network infrastructure for a VPC, analyze the use of subnets and gateways for secure operation.
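
The segmentation objectives above can be sketched as a least-privilege ingress rule set in EC2's IpPermissions shape. The CIDRs and port are hypothetical; the list would be passed to boto3's `ec2.authorize_security_group_ingress`.

```python
# Sketch: minimum ingress for an application tier -- app traffic only
# from the load balancer subnet, SSH only from an admin network.
# All CIDRs and the port are hypothetical examples.

def minimal_ingress(app_port: int, admin_cidr: str, lb_cidr: str) -> list:
    return [
        {   # application traffic only from the load balancer subnet
            "IpProtocol": "tcp",
            "FromPort": app_port,
            "ToPort": app_port,
            "IpRanges": [{"CidrIp": lb_cidr, "Description": "ALB subnet"}],
        },
        {   # SSH only from the admin network, never 0.0.0.0/0
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": admin_cidr, "Description": "admin VPN"}],
        },
    ]

rules = minimal_ingress(8443, "10.0.100.0/24", "10.0.1.0/24")
```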

3.3 Troubleshoot a secure network infrastructure.
 Determine where network traffic flow is being denied.
 Given a configuration, confirm security groups and NACLs have been implemented correctly.

3.4 Design and implement host-based security.

 Given security requirements, install and configure host-based protections including Inspector, SSM.
 Decide when to use host-based firewall like iptables.
 Recommend methods for host hardening and monitoring.

Domain 4: Identity and Access Management
4.1 Design and implement a scalable authorization and authentication system to access AWS resources.
 Given a description of a workload, analyze the access control configuration for AWS services and make recommendations that reduce risk.
 Given a description of how an organization manages their AWS accounts, verify the security of their root user.
 Given your organization’s compliance requirements, determine when to apply user policies and resource policies.
 Within an organization’s policy, determine when to federate a directory service to IAM.
 Design a scalable authorization model that includes users, groups, roles, and policies.
 Identify and restrict individual users of data and AWS resources.
 Review policies to establish that users/systems are restricted from performing functions beyond their responsibility, and also enforce proper separation of duties.

4.2 Troubleshoot an authorization and authentication system to access AWS resources.
 Investigate a user’s inability to access S3 bucket contents.
 Investigate a user’s inability to switch roles to a different account.
 Investigate an Amazon EC2 instance’s inability to access a given AWS resource.
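
One common way to investigate the access failures above is the IAM policy simulator. The sketch below builds parameters for boto3's `iam.simulate_principal_policy`; the user ARN, bucket, and key are hypothetical.

```python
# Sketch: policy-simulator request to check why a user cannot read an
# S3 bucket. The principal, bucket, and object key are hypothetical.

def s3_read_simulation(user_arn: str, bucket: str, key: str) -> dict:
    return {
        "PolicySourceArn": user_arn,            # whose policies to evaluate
        "ActionNames": ["s3:GetObject", "s3:ListBucket"],
        "ResourceArns": [
            f"arn:aws:s3:::{bucket}",           # ListBucket is on the bucket
            f"arn:aws:s3:::{bucket}/{key}",     # GetObject is on the object
        ],
    }

sim = s3_read_simulation("arn:aws:iam::111122223333:user/analyst",
                         "sap-reports", "2023/report.csv")
```

The response's `EvaluationResults` would show, per action, whether the decision was `allowed`, `implicitDeny`, or `explicitDeny`, pointing at the misconfigured policy.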

Domain 5: Data Protection

5.1 Design and implement key management and use.
 Analyze a given scenario to determine an appropriate key management solution.
 Given a set of data protection requirements, evaluate key usage and recommend required changes.
 Determine and control the blast radius of a key compromise event and design a solution to contain the same.

5.2 Troubleshoot key management.
 Break down the difference between a KMS key grant and IAM policy.
 Deduce the precedence given different conflicting policies for a given key.
 Determine when and how to revoke permissions for a user or service in the event of a compromise.
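
The grant-versus-policy distinction in 5.2 can be illustrated side by side: a grant is created through the CreateGrant API, is scoped to explicit operations, and can be retired or revoked by its grant ID, while a key policy is a JSON resource policy attached to the key. The key ID and principals below are hypothetical.

```python
import json

# Sketch: a KMS grant (API-created, operation-scoped, revocable by
# grant ID) next to an equivalent key-policy statement (declarative
# resource policy). Key ID and principal ARNs are hypothetical.

def decrypt_grant(key_id: str, grantee_arn: str) -> dict:
    """CreateGrant parameters allowing only Decrypt."""
    return {
        "KeyId": key_id,
        "GranteePrincipal": grantee_arn,
        "Operations": ["Decrypt"],   # grants list permitted operations
    }

def key_policy_statement(role_arn: str) -> str:
    """Key-policy statement granting the same permission declaratively."""
    return json.dumps({
        "Sid": "AllowDecryptForBackupRole",
        "Effect": "Allow",
        "Principal": {"AWS": role_arn},
        "Action": "kms:Decrypt",
        "Resource": "*",             # "*" means this key, in a key policy
    })

grant = decrypt_grant("1234abcd-12ab-34cd-56ef-1234567890ab",
                      "arn:aws:iam::111122223333:role/backup-restore")
statement = key_policy_statement("arn:aws:iam::111122223333:role/backup-restore")
```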

5.3 Design and implement a data encryption solution for data at rest and data in transit.
 Given a set of data protection requirements, evaluate the security of the data at rest in a workload and recommend required changes.
 Verify policy on a key such that it can only be used by specific AWS services.
 Distinguish the compliance state of data through tag-based data classifications and automate remediation.
 Evaluate a number of transport encryption techniques and select the appropriate method (i.e. TLS, IPsec, client-side KMS encryption).

The following general tools and technologies are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:
 AWS CLI
 AWS SDK
 AWS Management Console
 Network analysis tools (packet capture and flow captures)
 SSH/RDP
 Signature Version 4
 TLS
 Certificate management
 Infrastructure as code (IaC)

AWS services and features
Note: Security affects all AWS services. Many services do not appear in this list because the overall service is out of scope, but the security aspects of the service are in scope. For example, a candidate for this exam would not be asked about the steps to set up replication for an S3 bucket, but the candidate might be asked about configuring an S3 bucket policy.

Management and Governance:
 AWS Audit Manager
 AWS CloudTrail
 Amazon CloudWatch
 AWS Config
 AWS Organizations
 AWS Systems Manager
 AWS Trusted Advisor

Networking and Content Delivery:
 Amazon Detective
 AWS Firewall Manager
 AWS Network Firewall
 AWS Security Hub
 AWS Shield
 Amazon VPC
    o VPC endpoints
    o Network ACLs
    o Security groups
 AWS WAF

Security, Identity, and Compliance:
 AWS Certificate Manager (ACM)
 AWS CloudHSM
 AWS Directory Service
 Amazon GuardDuty
 AWS Identity and Access Management (IAM)
 Amazon Inspector
 AWS Key Management Service (AWS KMS)
 Amazon Macie
 AWS Single Sign-On

Out-of-scope AWS services and features
The following is a non-exhaustive list of AWS services and features that are not covered on the exam. These services and features do not represent every AWS offering that is excluded from the exam content. Services or features that are entirely unrelated to the target job roles for the exam are excluded from this list because they are assumed to be irrelevant.

Out-of-scope AWS services and features include the following:
 Application development services
 IoT services
 Machine learning (ML) services
 Media services
 Migration and transfer services


QUESTION 1
The Security team believes that a former employee may have gained unauthorized access to AWS resources sometime in the past 3 months by using an identified access key.
What approach would enable the Security team to find out what the former employee may have done within AWS?

A. Use the AWS CloudTrail console to search for user activity.
B. Use the Amazon CloudWatch Logs console to filter CloudTrail data by user.
C. Use AWS Config to see what actions were taken by the user.
D. Use Amazon Athena to query CloudTrail logs stored in Amazon S3.

Answer: A
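
Option D describes querying CloudTrail logs with Athena; such a query might look like the sketch below. The table name (and the assumption that CloudTrail's standard columns are mapped) is hypothetical, and in practice the SQL string would be submitted with `athena.start_query_execution`.

```python
# Sketch: an Athena SQL query over a hypothetical CloudTrail table,
# filtered to one access key over roughly the last 3 months.

def activity_query(access_key_id: str) -> str:
    return f"""
        SELECT eventtime, eventsource, eventname, awsregion, sourceipaddress
        FROM cloudtrail_logs
        WHERE useridentity.accesskeyid = '{access_key_id}'
          AND eventtime > date_format(date_add('month', -3, now()),
                                      '%Y-%m-%dT%H:%i:%sZ')
        ORDER BY eventtime
    """

# AWS documentation's well-known example access key, not a real credential
sql = activity_query("AKIAIOSFODNN7EXAMPLE")
```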

QUESTION 2
A company is storing data in Amazon S3 Glacier. The security engineer implemented a new vault lock policy for 10 TB of data and called the initiate-vault-lock operation 12 hours ago. The audit team identified a typo in the policy that is allowing unintended access to the vault.
What is the MOST cost-effective way to correct this?

A. Call the abort-vault-lock operation. Update the policy. Call the initiate-vault-lock operation again.
B. Copy the vault data to a new S3 bucket. Delete the vault. Create a new vault with the data.
C. Update the policy to keep the vault lock in place.
D. Update the policy. Call initiate-vault-lock operation again to apply the new policy.

Answer: A

Explanation/Reference:
Initiate the lock by attaching a vault lock policy to your vault, which sets the lock to an in-progress state and returns a lock ID. While in the in-progress state, you have 24 hours to validate your vault lock policy before the lock ID expires.
Use the lock ID to complete the lock process. If the vault lock policy doesn’t work as expected, you can abort the lock and restart from the beginning. For information on how to use the S3 Glacier API to lock a vault, see Locking a Vault by Using the Amazon S3 Glacier API.
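
The abort-and-retry flow from the explanation can be sketched as the parameter dicts for the S3 Glacier API calls. The vault name, account ID, and policy contents are hypothetical; the dicts map onto boto3's `glacier.abort_vault_lock` and `glacier.initiate_vault_lock`.

```python
import json

# Sketch: correct a mistyped vault lock policy within the 24-hour
# in-progress window. Vault name and account ID are hypothetical.

VAULT = "sap-backup-vault"

# Step 1: abort the in-progress lock (glacier.abort_vault_lock)
abort_params = {"vaultName": VAULT}

# Step 2: the corrected policy -- deny deletes for archives < 365 days old
corrected_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDeleteFor365Days",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "glacier:DeleteArchive",
        "Resource": f"arn:aws:glacier:us-east-1:111122223333:vaults/{VAULT}",
        "Condition": {
            "NumericLessThan": {"glacier:ArchiveAgeInDays": "365"}
        },
    }],
}

# Step 3: re-initiate the lock (glacier.initiate_vault_lock); the call
# returns a lock ID that complete_vault_lock must use within 24 hours.
initiate_params = {
    "vaultName": VAULT,
    "policy": {"Policy": json.dumps(corrected_policy)},
}
```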

QUESTION 3

A company wants to control access to its AWS resources by using identities and groups that are defined in its
existing Microsoft Active Directory.
What must the company create in its AWS account to map permissions for AWS services to Active Directory user attributes?

A. AWS IAM groups
B. AWS IAM users
C. AWS IAM roles
D. AWS IAM access keys

Answer: C

Explanation/Reference:
Reference: https://aws.amazon.com/blogs/security/how-to-connect-your-on-premises-active-directory-to-aws-using-ad-connector/

QUESTION 4
A company has contracted with a third party to audit several AWS accounts. To enable the audit, cross-account IAM roles have been created in each account targeted for audit. The Auditor is having trouble accessing some of the accounts.
Which of the following may be causing this problem? (Choose three.)

A. The external ID used by the Auditor is missing or incorrect.
B. The Auditor is using the incorrect password.
C. The Auditor has not been granted sts:AssumeRole for the role in the destination account.
D. The Amazon EC2 role used by the Auditor must be set to the destination account role.
E. The secret key used by the Auditor is missing or incorrect.
F. The role ARN used by the Auditor is missing or incorrect.

Answer: A, C, F
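Each of these failure causes maps to a concrete setting on one side of the trust relationship. The sketch below shows both sides; all account IDs, names, and the external ID are placeholder values:

```shell
# Audited account: trust policy on the cross-account role. A missing or
# wrong sts:ExternalId condition value, or an auditor IAM policy that does
# not grant sts:AssumeRole on this role, will block assumption.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::999999999999:root" },
    "Action": "sts:AssumeRole",
    "Condition": { "StringEquals": { "sts:ExternalId": "audit-example-id" } }
  }]
}
EOF
aws iam create-role --role-name AuditRole \
  --assume-role-policy-document file://trust-policy.json

# Auditor account: the role ARN and external ID must match exactly.
aws sts assume-role \
  --role-arn "arn:aws:iam::111122223333:role/AuditRole" \
  --role-session-name audit-session \
  --external-id audit-example-id
```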

QUESTION 5
Compliance requirements state that all communications between company on-premises hosts and EC2 instances be encrypted in transit. Hosts use custom proprietary protocols for their communication, and EC2 instances need to be fronted by a load balancer for increased availability.
Which of the following solutions will meet these requirements?

A. Offload SSL termination onto an SSL listener on a Classic Load Balancer, and use a TCP connection between the load balancer and the EC2 instances.
B. Route all traffic through a TCP listener on a Classic Load Balancer, and terminate the TLS connection on the EC2 instances.
C. Create an HTTPS listener using an Application Load Balancer, and route all of the communication through that load balancer.
D. Offload SSL termination onto an SSL listener using an Application Load Balancer, and re-establish an SSL connection between the load balancer and the EC2 instances.

Answer: B

Explanation/Reference:
A TCP listener on a Classic Load Balancer passes traffic through without terminating it, so the custom protocol traffic stays encrypted end to end between the on-premises hosts and the EC2 instances. An Application Load Balancer supports only HTTP/HTTPS listeners, so it cannot carry custom proprietary protocols.
 

AWS Certified Solutions Architect – Associate (SAA-C03) Exam Guide

Introduction
The AWS Certified Solutions Architect – Associate (SAA-C03) exam is intended for individuals who perform in a solutions architect role. The exam validates a candidate’s ability to use AWS technologies to design solutions based on the AWS Well-Architected Framework.

The exam also validates a candidate’s ability to complete the following tasks:
• Design solutions that incorporate AWS services to meet current business requirements and future projected needs
• Design architectures that are secure, resilient, high-performing, and cost-optimized
• Review existing solutions and determine improvements

Target candidate description
The target candidate should have at least 1 year of hands-on experience designing cloud solutions that use AWS services.
For a detailed list of specific tools and technologies that might be covered on the exam, as well as lists of in-scope and out-of-scope AWS services, refer to the Appendix.

Exam content

Response types
There are two types of questions on the exam:
• Multiple choice: Has one correct response and three incorrect responses (distractors)
• Multiple response: Has two or more correct responses out of five or more response options

Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill might choose. Distractors are generally plausible responses that match the content area.

Unanswered questions are scored as incorrect; there is no penalty for guessing. The exam includes 50 questions that will affect your score.

Unscored content
The exam includes 15 unscored questions that do not affect your score. AWS collects information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified Solutions Architect – Associate exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.
Your results for the exam are reported as a scaled score of 100–1,000. The minimum passing score is 720. Your score shows how you performed on the exam as a whole and whether or not you passed. Scaled scoring models help equate scores across multiple exam forms that might have slightly different difficulty levels.

Your score report could contain a table of classifications of your performance at each section level. This information provides general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each section. You need to pass only the overall exam.

Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback. Candidates who pass the exam will not receive this additional information.

Content outline

This exam guide includes weightings, test domains, and task statements for the exam. It is not a comprehensive listing of the content on the exam. However, additional context for each of the task statements is available to help guide your preparation for the exam. The following table lists the main content domains and their weightings. The table precedes the complete exam content outline, which includes the additional context. The percentage in each domain represents only scored content.

Domain 1: Design Secure Architectures 30%
Domain 2: Design Resilient Architectures 26%
Domain 3: Design High-Performing Architectures 24%
Domain 4: Design Cost-Optimized Architectures 20%

Domain 1: Design Secure Architectures
Task Statement 1: Design secure access to AWS resources.

Knowledge of:
• Access controls and management across multiple accounts
• AWS federated access and identity services (for example, AWS Identity and Access Management [IAM], AWS Single Sign-On [AWS SSO])
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• AWS security best practices (for example, the principle of least privilege)
• The AWS shared responsibility model

Skills in:
• Applying AWS security best practices to IAM users and root users (for example, multi-factor authentication [MFA])
• Designing a flexible authorization model that includes IAM users, groups, roles, and policies
• Designing a role-based access control strategy (for example, AWS Security Token Service [AWS STS], role switching, cross-account access)
• Designing a security strategy for multiple AWS accounts (for example, AWS Control Tower, service control policies [SCPs])
• Determining the appropriate use of resource policies for AWS services
• Determining when to federate a directory service with IAM roles

Task Statement 2: Design secure workloads and applications.

Knowledge of:

• Application configuration and credentials security
• AWS service endpoints
• Control ports, protocols, and network traffic on AWS
• Secure application access
• Security services with appropriate use cases (for example, Amazon Cognito, Amazon GuardDuty, Amazon Macie)
• Threat vectors external to AWS (for example, DDoS, SQL injection)

Skills in:
• Designing VPC architectures with security components (for example, security groups, route tables, network ACLs, NAT gateways)
• Determining network segmentation strategies (for example, using public subnets and private subnets)
• Integrating AWS services to secure applications (for example, AWS Shield, AWS WAF, AWS SSO, AWS Secrets Manager)
• Securing external network connections to and from the AWS Cloud (for example, VPN, AWS Direct Connect)

Task Statement 3: Determine appropriate data security controls.
Knowledge of:
• Data access and governance
• Data recovery
• Data retention and classification
• Encryption and appropriate key management

Skills in:
• Aligning AWS technologies to meet compliance requirements
• Encrypting data at rest (for example, AWS Key Management Service [AWS KMS])
• Encrypting data in transit (for example, AWS Certificate Manager [ACM] using TLS)
• Implementing access policies for encryption keys
• Implementing data backups and replications
• Implementing policies for data access, lifecycle, and protection
• Rotating encryption keys and renewing certificates

Domain 2: Design Resilient Architectures

Task Statement 1: Design scalable and loosely coupled architectures.

Knowledge of:
• API creation and management (for example, Amazon API Gateway, REST API)
• AWS managed services with appropriate use cases (for example, AWS Transfer Family, Amazon Simple Queue Service [Amazon SQS], Secrets Manager)
• Caching strategies
• Design principles for microservices (for example, stateless workloads compared with stateful workloads)
• Event-driven architectures
• Horizontal scaling and vertical scaling
• How to appropriately use edge accelerators (for example, content delivery network [CDN])
• How to migrate applications into containers
• Load balancing concepts (for example, Application Load Balancer)
• Multi-tier architectures
• Queuing and messaging concepts (for example, publish/subscribe)
• Serverless technologies and patterns (for example, AWS Fargate, AWS Lambda)
• Storage types with associated characteristics (for example, object, file, block)
• The orchestration of containers (for example, Amazon Elastic Container Service [Amazon ECS], Amazon Elastic Kubernetes Service [Amazon EKS])
• When to use read replicas
• Workflow orchestration (for example, AWS Step Functions)

Skills in:
• Designing event-driven, microservice, and/or multi-tier architectures based on requirements
• Determining scaling strategies for components used in an architecture design
• Determining the AWS services required to achieve loose coupling based on requirements
• Determining when to use containers
• Determining when to use serverless technologies and patterns
• Recommending appropriate compute, storage, networking, and database technologies based on requirements
• Using purpose-built AWS services for workloads

Task Statement 2: Design highly available and/or fault-tolerant architectures.

Knowledge of:

• AWS global infrastructure (for example, Availability Zones, AWS Regions, Amazon Route 53)
• AWS managed services with appropriate use cases (for example, Amazon Comprehend, Amazon Polly)
• Basic networking concepts (for example, route tables)
• Disaster recovery (DR) strategies (for example, backup and restore, pilot light, warm standby, active-active failover, recovery point objective [RPO], recovery time objective [RTO])
• Distributed design patterns
• Failover strategies
• Immutable infrastructure
• Load balancing concepts (for example, Application Load Balancer)
• Proxy concepts (for example, Amazon RDS Proxy)
• Service quotas and throttling (for example, how to configure the service quotas for a workload in a standby environment)
• Storage options and characteristics (for example, durability, replication)
• Workload visibility (for example, AWS X-Ray)

Skills in:

• Determining automation strategies to ensure infrastructure integrity
• Determining the AWS services required to provide a highly available and/or fault-tolerant architecture across AWS Regions or Availability Zones
• Identifying metrics based on business requirements to deliver a highly available solution
• Implementing designs to mitigate single points of failure
• Implementing strategies to ensure the durability and availability of data (for example, backups)
• Selecting an appropriate DR strategy to meet business requirements
• Using AWS services that improve the reliability of legacy applications and applications not built for the cloud (for example, when application changes are not possible)
• Using purpose-built AWS services for workloads

Domain 3: Design High-Performing Architectures
Task Statement 1: Determine high-performing and/or scalable storage solutions.
Knowledge of:
• Hybrid storage solutions to meet business requirements
• Storage services with appropriate use cases (for example, Amazon S3, Amazon Elastic File System [Amazon EFS], Amazon Elastic Block Store [Amazon EBS])
• Storage types with associated characteristics (for example, object, file, block)

Skills in:
• Determining storage services and configurations that meet performance demands
• Determining storage services that can scale to accommodate future needs

Task Statement 2: Design high-performing and elastic compute solutions.

Knowledge of:

• AWS compute services with appropriate use cases (for example, AWS Batch, Amazon EMR, Fargate)
• Distributed computing concepts supported by AWS global infrastructure and edge services
• Queuing and messaging concepts (for example, publish/subscribe)
• Scalability capabilities with appropriate use cases (for example, Amazon EC2 Auto Scaling, AWS Auto Scaling)
• Serverless technologies and patterns (for example, Lambda, Fargate)
• The orchestration of containers (for example, Amazon ECS, Amazon EKS)

Skills in:

• Decoupling workloads so that components can scale independently
• Identifying metrics and conditions to perform scaling actions
• Selecting the appropriate compute options and features (for example, EC2 instance types) to meet business requirements
• Selecting the appropriate resource type and size (for example, the amount of Lambda memory) to meet business requirements

Task Statement 3: Determine high-performing database solutions.
Knowledge of:
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• Caching strategies and services (for example, Amazon ElastiCache)
• Data access patterns (for example, read-intensive compared with write-intensive)
• Database capacity planning (for example, capacity units, instance types, Provisioned IOPS)
• Database connections and proxies
• Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations)
• Database replication (for example, read replicas)
• Database types and services (for example, serverless, relational compared with non-relational, in-memory)

Skills in:
• Configuring read replicas to meet business requirements
• Designing database architectures
• Determining an appropriate database engine (for example, MySQL compared with PostgreSQL)
• Determining an appropriate database type (for example, Amazon Aurora, Amazon DynamoDB)
• Integrating caching to meet business requirements

Task Statement 4: Determine high-performing and/or scalable network architectures.
Knowledge of:
• Edge networking services with appropriate use cases (for example, Amazon CloudFront, AWS Global Accelerator)
• How to design network architecture (for example, subnet tiers, routing, IP addressing)
• Load balancing concepts (for example, Application Load Balancer)
• Network connection options (for example, AWS VPN, Direct Connect, AWS PrivateLink)

Skills in:
• Creating a network topology for various architectures (for example, global, hybrid, multi-tier)
• Determining network configurations that can scale to accommodate future needs
• Determining the appropriate placement of resources to meet business requirements
• Selecting the appropriate load balancing strategy

Task Statement 5: Determine high-performing data ingestion and transformation solutions.
Knowledge of:

• Data analytics and visualization services with appropriate use cases (for example, Amazon Athena, AWS Lake Formation, Amazon QuickSight)
• Data ingestion patterns (for example, frequency)
• Data transfer services with appropriate use cases (for example, AWS DataSync, AWS Storage Gateway)
• Data transformation services with appropriate use cases (for example, AWS Glue)
• Secure access to ingestion access points
• Sizes and speeds needed to meet business requirements
• Streaming data services with appropriate use cases (for example, Amazon Kinesis)

Skills in:
• Building and securing data lakes
• Designing data streaming architectures
• Designing data transfer solutions
• Implementing visualization strategies
• Selecting appropriate compute options for data processing (for example, Amazon EMR)
• Selecting appropriate configurations for ingestion
• Transforming data between formats (for example, .csv to .parquet)

Domain 4: Design Cost-Optimized Architectures
Task Statement 1: Design cost-optimized storage solutions.
Knowledge of:
• Access options (for example, an S3 bucket with Requester Pays object storage)
• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, AWS Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• AWS storage services with appropriate use cases (for example, Amazon FSx, Amazon EFS, Amazon S3, Amazon EBS)
• Backup strategies
• Block storage options (for example, hard disk drive [HDD] volume types, solid state drive [SSD] volume types)
• Data lifecycles
• Hybrid storage options (for example, DataSync, Transfer Family, Storage Gateway)
• Storage access patterns
• Storage tiering (for example, cold tiering for object storage)
• Storage types with associated characteristics (for example, object, file, block)

Skills in:

• Designing appropriate storage strategies (for example, batch uploads to Amazon S3 compared with individual uploads)
• Determining the correct storage size for a workload
• Determining the lowest cost method of transferring data for a workload to AWS storage
• Determining when storage auto scaling is required
• Managing S3 object lifecycles
• Selecting the appropriate backup and/or archival solution
• Selecting the appropriate service for data migration to storage services
• Selecting the appropriate storage tier
• Selecting the correct data lifecycle for storage
• Selecting the most cost-effective storage service for a workload

Task Statement 2: Design cost-optimized compute solutions.
Knowledge of:

• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• AWS global infrastructure (for example, Availability Zones, AWS Regions)
• AWS purchasing options (for example, Spot Instances, Reserved Instances, Savings Plans)
• Distributed compute strategies (for example, edge processing)
• Hybrid compute options (for example, AWS Outposts, AWS Snowball Edge)
• Instance types, families, and sizes (for example, memory optimized, compute optimized, virtualization)
• Optimization of compute utilization (for example, containers, serverless computing, microservices)
• Scaling strategies (for example, auto scaling, hibernation)

Skills in:
• Determining an appropriate load balancing strategy (for example, Application Load Balancer [Layer 7] compared with Network Load Balancer [Layer 4] compared with Gateway Load Balancer)
• Determining appropriate scaling methods and strategies for elastic workloads (for example, horizontal compared with vertical, EC2 hibernation)
• Determining cost-effective AWS compute services with appropriate use cases (for example, Lambda, Amazon EC2, Fargate)
• Determining the required availability for different classes of workloads (for example, production workloads, non-production workloads)
• Selecting the appropriate instance family for a workload
• Selecting the appropriate instance size for a workload

Task Statement 3: Design cost-optimized database solutions.
Knowledge of:

• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• Caching strategies
• Data retention policies
• Database capacity planning (for example, capacity units)
• Database connections and proxies
• Database engines with appropriate use cases (for example, heterogeneous migrations, homogeneous migrations)
• Database replication (for example, read replicas)
• Database types and services (for example, relational compared with non-relational, Aurora, DynamoDB)

Skills in:
• Designing appropriate backup and retention policies (for example, snapshot frequency)
• Determining an appropriate database engine (for example, MySQL compared with PostgreSQL)
• Determining cost-effective AWS database services with appropriate use cases (for example, DynamoDB compared with Amazon RDS, serverless)
• Determining cost-effective AWS database types (for example, time series format, columnar format)
• Migrating database schemas and data to different locations and/or different database engines

Task Statement 4: Design cost-optimized network architectures.
Knowledge of:

• AWS cost management service features (for example, cost allocation tags, multi-account billing)
• AWS cost management tools with appropriate use cases (for example, Cost Explorer, AWS Budgets, AWS Cost and Usage Report)
• Load balancing concepts (for example, Application Load Balancer)
• NAT gateways (for example, NAT instance costs compared with NAT gateway costs)
• Network connectivity (for example, private lines, dedicated lines, VPNs)
• Network routing, topology, and peering (for example, AWS Transit Gateway, VPC peering)
• Network services with appropriate use cases (for example, DNS)

Skills in:
• Configuring appropriate NAT gateway types for a network (for example, a single shared NAT gateway compared with NAT gateways for each Availability Zone)
• Configuring appropriate network connections (for example, Direct Connect compared with VPN compared with internet)
• Configuring appropriate network routes to minimize network transfer costs (for example, Region to Region, Availability Zone to Availability Zone, private to public, Global Accelerator, VPC endpoints)
• Determining strategic needs for content delivery networks (CDNs) and edge caching
• Reviewing existing workloads for network optimizations
• Selecting an appropriate throttling strategy
• Selecting the appropriate bandwidth allocation for a network device (for example, a single VPN compared with multiple VPNs, Direct Connect speed)


Question 1: 
A software development company is using serverless computing with AWS Lambda to build and run applications without having to set up or manage servers. They have a Lambda function that connects to MongoDB Atlas, a popular Database as a Service (DBaaS) platform, and also uses a third-party API to fetch certain data for their application. One of the developers was instructed to create the environment variables for the MongoDB database hostname, username, and password, as well as the API credentials that the Lambda function will use in the DEV, SIT, UAT, and PROD environments.
Considering that the Lambda function is storing sensitive database and API credentials, how can this information be secured to prevent other developers in the team, or anyone, from seeing these credentials in plain text? Select the best option that provides maximum security.

A. Enable SSL encryption that leverages AWS CloudHSM to store and encrypt the sensitive information.
B. AWS Lambda does not provide encryption for the environment variables. Deploy your code to an EC2 instance instead.
C. There is no need to do anything because, by default, AWS Lambda already encrypts the environment variables using the AWS Key Management Service.
D. Create a new KMS key and use it to enable encryption helpers that leverage AWS Key Management Service to store and encrypt the sensitive information.

Correct Answer: D

Explanation
When you create or update Lambda functions that use environment variables, AWS Lambda encrypts them using the AWS Key Management Service. When your Lambda function is invoked, those values are decrypted and made available to the Lambda code.

The first time you create or update a Lambda function that uses environment variables in a Region, a default service key is created for you automatically within AWS KMS and is used to encrypt the environment variables. However, if you want to use encryption helpers to encrypt environment variables after your Lambda function is created, you must create your own AWS KMS key and choose it instead of the default key; choosing the default service key at that point results in an error. Creating your own key gives you more flexibility, including the ability to create, rotate, disable, and define access controls, and to audit the encryption keys used to protect your data.
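A sketch of that setup with the AWS CLI. This is illustrative only; the function name, key description, and variable values are placeholders:

```shell
# Create a customer managed KMS key instead of relying on the default
# service key, then point the function's environment encryption at it.
KEY_ARN=$(aws kms create-key \
  --description "lambda-env-credentials" \
  --query KeyMetadata.Arn --output text)

aws lambda update-function-configuration \
  --function-name order-api-dev \
  --kms-key-arn "$KEY_ARN" \
  --environment "Variables={DB_HOST=placeholder,DB_USER=placeholder}"
```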

Question 2: 
A company hosted an e-commerce website on an Auto Scaling group of EC2 instances behind an Application Load Balancer. The Solutions Architect noticed that the website is receiving a large number of illegitimate external requests from multiple systems with IP addresses that constantly change. To resolve the performance issues, the Solutions Architect must implement a solution that would block the illegitimate requests with minimal impact on legitimate traffic.
Which of the following options fulfills this requirement?

A. Create a regular rule in AWS WAF and associate the web ACL to an Application Load Balancer.
B. Create a rate-based rule in AWS WAF and associate the web ACL to an Application Load Balancer.
C. Create a custom rule in the security group of the Application Load Balancer to block the offending requests.
D. Create a custom network ACL and associate it with the subnet of the Application Load Balancer to block the offending requests.

Correct Answer: B
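A rate-based rule counts requests per source IP over a rolling window and blocks addresses that exceed the limit, which is why it handles attackers whose IP addresses constantly change. A minimal wafv2 configuration sketch; the ACL name, rule name, and request limit are placeholders:

```shell
cat > rate-rule.json <<'EOF'
[{
  "Name": "throttle-floods",
  "Priority": 0,
  "Statement": {
    "RateBasedStatement": { "Limit": 2000, "AggregateKeyType": "IP" }
  },
  "Action": { "Block": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "throttleFloods"
  }
}]
EOF
# REGIONAL scope is required for web ACLs attached to an Application Load Balancer.
aws wafv2 create-web-acl --name ecommerce-acl --scope REGIONAL \
  --default-action Allow={} \
  --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=ecommerceAcl \
  --rules file://rate-rule.json
```

The web ACL is then attached to the load balancer with aws wafv2 associate-web-acl.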

Question 4: 
There was an incident in your production environment where the user data stored in the S3 bucket was accidentally deleted by one of the Junior DevOps Engineers. The issue was escalated to your manager, and after a few days, you were instructed to improve the security and protection of your AWS resources.
What combination of the following options will protect the S3 objects in your bucket from both accidental deletion and overwriting? (Select TWO.)

A. Enable Versioning
B. Enable Amazon S3 Intelligent-Tiering
C. Provide access to S3 data strictly through pre-signed URL only
D. Enable Multi-Factor Authentication Delete
E. Disallow S3 Delete using an IAM bucket policy

Correct Answer: A, D
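Versioning and MFA Delete are both bucket-level settings: versioning keeps prior versions when an object is overwritten, and MFA Delete requires a one-time code to permanently delete a version or suspend versioning. A configuration sketch (the bucket name and MFA device are placeholders; note that MFA Delete can only be enabled by the root user through the CLI/API):

```shell
aws s3api put-bucket-versioning \
  --bucket user-data-bucket \
  --versioning-configuration Status=Enabled,MFADelete=Enabled \
  --mfa "arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456"
```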

Question 5:
A popular social media website uses a CloudFront web distribution to serve its static content to millions of users around the globe. Recently, they have been receiving complaints that users take a long time to log in to the website, and there are occasions when users get HTTP 504 errors. You are instructed by your manager to significantly reduce the users' login time to further optimize the system.
Which of the following options should you use together to set up a cost-effective solution that can improve your application’s performance? (Select TWO.)

A. Customize the content that the CloudFront web distribution delivers to your users using Lambda@Edge, which allows your Lambda functions to execute the authentication process in AWS locations closer to the users.
B. Deploy your application to multiple AWS regions to accommodate your users around the world. Set up a Route 53 record with latency routing policy to route incoming traffic to the region that provides the best latency to the user.
C. Configure your origin to add a Cache-Control max-age directive to your objects, and specify the longest practical value for max-age to increase the cache hit ratio of your CloudFront distribution.
D. Set up an origin failover by creating an origin group with two origins. Specify one as the primary origin and the other as the second origin which CloudFront automatically switches to when the primary origin returns specific HTTP status code failure responses.
E. Use multiple and geographically dispersed VPCs across various AWS Regions, then create a transit VPC to connect all of your resources. To handle the requests faster, set up Lambda functions in each Region using the AWS Serverless Application Model (SAM) service.

Correct Answer: A,D

Question 6: 
A company is using Amazon S3 to store frequently accessed data. When an object is created or deleted, the S3 bucket will send an event notification to the Amazon SQS queue. A solutions architect needs to create a solution that will notify the development and operations team about the created or deleted objects.
Which of the following would satisfy this requirement?

A. Create a new Amazon SNS FIFO topic for the other team. Grant Amazon S3 permission to send the notification to the second SNS topic.
B. Set up another Amazon SQS queue for the other team. Grant Amazon S3 permission to send a notification to the second SQS queue.
C. Set up an Amazon SNS topic and configure two Amazon SQS queues to poll the SNS topic. Grant Amazon S3 permission to send notifications to Amazon SNS and update the bucket to use the new SNS topic.
D. Create an Amazon SNS topic and configure two Amazon SQS queues to subscribe to the topic. Grant Amazon S3 permission to send notifications to Amazon SNS and update the bucket to use the new SNS topic.

Correct Answer: D
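The fan-out in option D can be sketched as follows. The topic, queue names, and bucket are placeholders, and the SQS queue policies that allow SNS to deliver messages are omitted for brevity:

```shell
TOPIC_ARN=$(aws sns create-topic --name s3-object-events \
  --query TopicArn --output text)

# One queue per team; each subscribes to the same topic.
for TEAM in development operations; do
  QUEUE_URL=$(aws sqs create-queue --queue-name "$TEAM-events" \
    --query QueueUrl --output text)
  QUEUE_ARN=$(aws sqs get-queue-attributes --queue-url "$QUEUE_URL" \
    --attribute-names QueueArn --query Attributes.QueueArn --output text)
  aws sns subscribe --topic-arn "$TOPIC_ARN" --protocol sqs \
    --notification-endpoint "$QUEUE_ARN"
done

# Point the bucket's event notifications at the topic instead of a queue.
aws s3api put-bucket-notification-configuration --bucket user-content-bucket \
  --notification-configuration '{"TopicConfigurations":[{"TopicArn":"'"$TOPIC_ARN"'","Events":["s3:ObjectCreated:*","s3:ObjectRemoved:*"]}]}'
```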

Appendix
Which key tools, technologies, and concepts might be covered on the exam?
The following is a non-exhaustive list of the tools and technologies that could appear on the exam. This list is subject to change and is provided to help you understand the general scope of services, features, or technologies on the exam. The general tools and technologies in this list appear in no particular order. AWS services are grouped according to their primary functions. While some of these technologies will likely be covered more than others on the exam, the order and placement of them in this list is no indication of relative weight or importance:

• Compute
• Cost management
• Database
• Disaster recovery
• High performance
• Management and governance
• Microservices and component decoupling
• Migration and data transfer
• Networking, connectivity, and content delivery
• Resiliency
• Security
• Serverless and event-driven design principles
• Storage


AWS services and features
Analytics:

• Amazon Athena
• AWS Data Exchange
• AWS Data Pipeline
• Amazon EMR
• AWS Glue
• Amazon Kinesis
• AWS Lake Formation
• Amazon Managed Streaming for Apache Kafka (Amazon MSK)
• Amazon OpenSearch Service (Amazon Elasticsearch Service)
• Amazon QuickSight
• Amazon Redshift

Application Integration:

• Amazon AppFlow
• AWS AppSync
• Amazon EventBridge (Amazon CloudWatch Events)
• Amazon MQ
• Amazon Simple Notification Service (Amazon SNS)
• Amazon Simple Queue Service (Amazon SQS)
• AWS Step Functions
AWS Cost Management:
• AWS Budgets
• AWS Cost and Usage Report
• AWS Cost Explorer
• Savings Plans
Compute:
• AWS Batch
• Amazon EC2
• Amazon EC2 Auto Scaling
• AWS Elastic Beanstalk
• AWS Outposts
• AWS Serverless Application Repository
• VMware Cloud on AWS
• AWS Wavelength
Containers:
• Amazon Elastic Container Registry (Amazon ECR)
• Amazon Elastic Container Service (Amazon ECS)
• Amazon ECS Anywhere
• Amazon Elastic Kubernetes Service (Amazon EKS)
• Amazon EKS Anywhere
• Amazon EKS Distro
Database:
• Amazon Aurora
• Amazon Aurora Serverless
• Amazon DocumentDB (with MongoDB compatibility)
• Amazon DynamoDB
• Amazon ElastiCache
• Amazon Keyspaces (for Apache Cassandra)
• Amazon Neptune
• Amazon Quantum Ledger Database (Amazon QLDB)
• Amazon RDS
• Amazon Redshift
• Amazon Timestream
Developer Tools:
• AWS X-Ray
Front-End Web and Mobile:
• AWS Amplify
• Amazon API Gateway
• AWS Device Farm
• Amazon Pinpoint
Machine Learning:
• Amazon Comprehend
• Amazon Forecast
• Amazon Fraud Detector
• Amazon Kendra
• Amazon Lex
• Amazon Polly
• Amazon Rekognition
• Amazon SageMaker
• Amazon Textract
• Amazon Transcribe
• Amazon Translate
Management and Governance:

• AWS Auto Scaling
• AWS CloudFormation
• AWS CloudTrail
• Amazon CloudWatch
• AWS Command Line Interface (AWS CLI)
• AWS Compute Optimizer
• AWS Config
• AWS Control Tower
• AWS License Manager
• Amazon Managed Grafana
• Amazon Managed Service for Prometheus
• AWS Management Console
• AWS Organizations
• AWS Personal Health Dashboard
• AWS Proton
• AWS Service Catalog
• AWS Systems Manager
• AWS Trusted Advisor
• AWS Well-Architected Tool

Media Services:
• Amazon Elastic Transcoder
• Amazon Kinesis Video Streams
Migration and Transfer:
• AWS Application Discovery Service
• AWS Application Migration Service (CloudEndure Migration)
• AWS Database Migration Service (AWS DMS)
• AWS DataSync
• AWS Migration Hub
• AWS Server Migration Service (AWS SMS)
• AWS Snow Family
• AWS Transfer Family
Networking and Content Delivery:
• Amazon CloudFront
• AWS Direct Connect
• Elastic Load Balancing (ELB)
• AWS Global Accelerator
• AWS PrivateLink
• Amazon Route 53
• AWS Transit Gateway
• Amazon VPC
• AWS VPN
Security, Identity, and Compliance:
• AWS Artifact
• AWS Audit Manager
• AWS Certificate Manager (ACM)
• AWS CloudHSM
• Amazon Cognito
• Amazon Detective
• AWS Directory Service
• AWS Firewall Manager
• Amazon GuardDuty
• AWS Identity and Access Management (IAM)
• Amazon Inspector
• AWS Key Management Service (AWS KMS)
• Amazon Macie
• AWS Network Firewall
• AWS Resource Access Manager (AWS RAM)
• AWS Secrets Manager
• AWS Security Hub
• AWS Shield
• AWS Single Sign-On
• AWS WAF
Serverless:
• AWS AppSync
• AWS Fargate
• AWS Lambda
Storage:
• AWS Backup
• Amazon Elastic Block Store (Amazon EBS)
• Amazon Elastic File System (Amazon EFS)
• Amazon FSx (for all types)
• Amazon S3
• Amazon S3 Glacier
• AWS Storage Gateway

Out-of-scope AWS services and features
The following is a non-exhaustive list of AWS services and features that are not covered on the exam. These services and features do not represent every AWS offering that is excluded from the exam content.
Analytics:
• Amazon CloudSearch
Application Integration:
• Amazon Managed Workflows for Apache Airflow (Amazon MWAA)
AR and VR:
• Amazon Sumerian
Blockchain:
• Amazon Managed Blockchain
Compute:
• Amazon Lightsail
Database:
• Amazon RDS on VMware
Developer Tools:
• AWS Cloud9
• AWS Cloud Development Kit (AWS CDK)
• AWS CloudShell
• AWS CodeArtifact
• AWS CodeBuild
• AWS CodeCommit
• AWS CodeDeploy
• Amazon CodeGuru
• AWS CodeStar
• Amazon Corretto
• AWS Fault Injection Simulator (AWS FIS)
• AWS Tools and SDKs
Front-End Web and Mobile:
• Amazon Location Service
Game Tech:
• Amazon GameLift
• Amazon Lumberyard
Internet of Things:
• All services
Machine Learning:
• Apache MXNet on AWS
• Amazon Augmented AI (Amazon A2I)
• AWS DeepComposer
• AWS Deep Learning AMIs (DLAMI)
• AWS Deep Learning Containers
• AWS DeepLens
• AWS DeepRacer
• Amazon DevOps Guru
• Amazon Elastic Inference
• Amazon HealthLake
• AWS Inferentia
• Amazon Lookout for Equipment
• Amazon Lookout for Metrics
• Amazon Lookout for Vision
• Amazon Monitron
• AWS Panorama
• Amazon Personalize
• PyTorch on AWS
• Amazon SageMaker Data Wrangler
• Amazon SageMaker Ground Truth
• TensorFlow on AWS
Management and Governance:
• AWS Chatbot
• AWS Console Mobile Application
• AWS Distro for OpenTelemetry
• AWS OpsWorks
Media Services:
• AWS Elemental Appliances and Software
• AWS Elemental MediaConnect
• AWS Elemental MediaConvert
• AWS Elemental MediaLive
• AWS Elemental MediaPackage
• AWS Elemental MediaStore
• AWS Elemental MediaTailor
• Amazon Interactive Video Service (Amazon IVS)
Migration and Transfer:
• Migration Evaluator (formerly TSO Logic)
Networking and Content Delivery:
• AWS App Mesh
• AWS Cloud Map
Quantum Technologies:
• Amazon Braket
Robotics:
• AWS RoboMaker
Satellite:
• AWS Ground Station

AWS Certified SysOps Administrator – Associate (SOA-C02) Exam

Introduction
The AWS Certified SysOps Administrator – Associate (SOA-C02) exam is intended for system administrators in a cloud operations role who have at least 1 year of hands-on experience with deployment, management, networking, and security on AWS.

The exam validates a candidate’s ability to complete the following tasks:
• Deploy, manage, and operate workloads on AWS
• Support and maintain AWS workloads according to the AWS Well-Architected Framework
• Perform operations by using the AWS Management Console and the AWS CLI
• Implement security controls to meet compliance requirements
• Monitor, log, and troubleshoot systems
• Apply networking concepts (for example, DNS, TCP/IP, firewalls)
• Implement architectural requirements (for example, high availability, performance, capacity)
• Perform business continuity and disaster recovery procedures
• Identify, classify, and remediate incidents

Recommended AWS knowledge
• Minimum of 1 year of hands-on experience with AWS technology
• Experience in deploying, managing, and operating workloads on AWS
• Understanding of the AWS Well-Architected Framework
• Hands-on experience with the AWS Management Console and the AWS CLI
• Understanding of AWS networking and security services
• Hands-on experience in implementing security controls and compliance requirements

Exam content

Response types
Three types of questions can appear on the exam. You might see some, or all, of these question types:
• Multiple choice: Has one correct response and three incorrect responses (distractors).
• Multiple response: Has two correct responses out of five options.
• Exam lab: Has a scenario that is composed of a set of tasks to perform in the AWS Management Console or AWS CLI.

Multiple choice and multiple response: Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that a candidate with incomplete knowledge or skill would likely choose. However, they are generally plausible responses that fit in the content area that is defined by the test objective.
Unanswered questions are scored as incorrect; there is no penalty for guessing.

All multiple-choice and multiple-response questions will appear at the start of the exam in one section. The end of this section will include a review screen, where you can return to any of the multiple-choice and multiple-response questions. This will be the last opportunity to answer the questions or change any answer selections. If your exam contains exam labs, that section will appear after the multiple-choice and multiple-response section. You will NOT be able to go back to the first section after you start the second section.

Exam labs: Complete the required tasks for a given scenario in the AWS Management Console or AWS CLI in the provided AWS account.

When you begin your exam, you will receive notification about the number of questions in the multiple-choice and multiple-response section, and the number of exam labs in the exam lab section. You will also learn the percentage of your score that will be determined by your work in the exam labs. Plan to leave 20 minutes to complete each exam lab.
Finish all work on an exam lab before moving to the next exam lab. You will NOT be able to return to a prior exam lab. You are welcome to use the virtual machine notepad or AWS CLI while working on your exam labs.

There might be more than one way to perform an exam lab. In those cases, you will receive full credit if you achieve the correct end state to the scenario. You will receive partial credit for partial completion of exam labs. However, exam content and the associated scoring are confidential, so you will receive no further information regarding partial credit that is awarded for an exam lab.
Tip: If you take your exam through online proctoring, you can use an external monitor as your ONLY display. Set your screen resolution to 1280 pixels x 1024 pixels or greater for a PC, and 1440 pixels x 900 pixels or greater for a Mac. Set the scaling to 100%. Other settings might result in a need to scroll within the console.
For a sample of the multiple-choice and multiple-response questions and exam labs, view the AWS Certified SysOps Administrator – Associate (SOA-C02) Sample Exam Questions document.

Unscored content
The exam will include unscored questions that do not affect your score. AWS will gather information about candidate performance on these unscored questions to evaluate these questions for future use as scored questions. These unscored questions are not identified on the exam.

Exam results
The AWS Certified SysOps Administrator – Associate (SOA-C02) exam is a pass or fail exam. The exam is scored against a minimum standard established by AWS professionals who follow certification industry best practices and guidelines.

Your results for the exam are reported as a score from 100–1,000. The minimum passing score is 720. Your score shows how you performed on the exam as a whole and whether or not you passed. Scaled scoring models are used to equate scores across multiple exam forms that might have slightly different difficulty levels.

Your score report contains a table that classifies your performance at each section level. This information is intended to provide general feedback about your exam performance. The exam uses a compensatory scoring model, which means that you do not need to achieve a passing score in each individual section. You need to pass only the overall exam.
Each section of the exam has a specific weighting, so some sections have more questions than other sections have. The table contains general information that highlights your strengths and weaknesses. Use caution when interpreting section-level feedback.

Content outline
This exam guide includes weightings, test domains, objectives, and example tasks only. It is not a comprehensive listing of the content on this exam. The following table lists the main content domains and their weightings.

Domain % of Exam
Domain 1: Monitoring, Logging, and Remediation 20%
Domain 2: Reliability and Business Continuity 16%
Domain 3: Deployment, Provisioning, and Automation 18%
Domain 4: Security and Compliance 16%
Domain 5: Networking and Content Delivery 18%
Domain 6: Cost and Performance Optimization 12%
TOTAL 100%
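The weighting table can be turned into rough per-domain question counts. The sketch below assumes a hypothetical 65-question form; AWS does not publish exact per-domain counts, so this only illustrates the percentages:

```python
# SOA-C02 domain weightings from the table above.
weights = {
    "Monitoring, Logging, and Remediation": 0.20,
    "Reliability and Business Continuity": 0.16,
    "Deployment, Provisioning, and Automation": 0.18,
    "Security and Compliance": 0.16,
    "Networking and Content Delivery": 0.18,
    "Cost and Performance Optimization": 0.12,
}
total_questions = 65  # assumed form size, for illustration only

# Estimated question count per domain (rounded to whole questions).
estimated = {domain: round(w * total_questions) for domain, w in weights.items()}

assert abs(sum(weights.values()) - 1.0) < 1e-9  # weightings sum to 100%
assert sum(estimated.values()) == total_questions
```

On a 65-question form, Domain 1's 20% works out to roughly 13 questions and Domain 6's 12% to roughly 8.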

Domain 1: Monitoring, Logging, and Remediation
1.1 Implement metrics, alarms, and filters by using AWS monitoring and logging services
 Identify, collect, analyze, and export logs (for example, Amazon CloudWatch Logs, CloudWatch Logs Insights, AWS CloudTrail logs)
 Collect metrics and logs using the CloudWatch agent
 Create CloudWatch alarms
 Create metric filters
 Create CloudWatch dashboards
 Configure notifications (for example, Amazon Simple Notification Service [Amazon SNS], Service Quotas, CloudWatch alarms, AWS Health events)
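Task 1.1 centers on CloudWatch alarms. The parameter shape accepted by CloudWatch's put_metric_alarm API can be sketched as a plain dict, so no AWS account is needed; the alarm name and SNS topic ARN below are hypothetical:

```python
# Sketch of put_metric_alarm parameters: alarm on average EC2 CPU above
# 80% for two consecutive 5-minute periods, then notify an SNS topic.
alarm = {
    "AlarmName": "HighCPUAlarm",                      # hypothetical name
    "Namespace": "AWS/EC2",
    "MetricName": "CPUUtilization",
    "Statistic": "Average",
    "Period": 300,               # seconds per evaluation window
    "EvaluationPeriods": 2,      # consecutive breaches before ALARM state
    "Threshold": 80.0,           # percent CPU
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical topic
}

# Time from first breach to alarm: period * evaluation periods.
assert alarm["Period"] * alarm["EvaluationPeriods"] == 600  # 10 minutes
```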

1.2 Remediate issues based on monitoring and availability metrics
• Troubleshoot or take corrective actions based on notifications and alarms
• Configure Amazon EventBridge rules to trigger actions
• Use AWS Systems Manager Automation documents to take action based on AWS Config rules
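EventBridge rules (task 1.2) match events against patterns in which each key maps to a list of acceptable values. A toy matcher for flat, top-level patterns illustrates the semantics; real patterns also support nesting, prefix matching, and numeric ranges:

```python
# Simplified EventBridge pattern matching: an event matches when, for every
# key in the pattern, the event's value is one of the listed candidates.
def matches(pattern: dict, event: dict) -> bool:
    return all(event.get(key) in allowed for key, allowed in pattern.items())

pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["EC2 Instance State-change Notification"],
}
event = {
    "source": "aws.ec2",
    "detail-type": "EC2 Instance State-change Notification",
    "detail": {"state": "running"},
}

assert matches(pattern, event)
assert not matches(pattern, {"source": "aws.s3"})  # wrong source, no detail-type
```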

Domain 2: Reliability and Business Continuity
2.1 Implement scalability and elasticity
• Create and maintain AWS Auto Scaling plans
• Implement caching
• Implement Amazon RDS replicas and Amazon Aurora Replicas
• Implement loosely coupled architectures
• Differentiate between horizontal scaling and vertical scaling

2.2 Implement high availability and resilient environments
• Configure Elastic Load Balancing and Amazon Route 53 health checks
• Differentiate between the use of a single Availability Zone and Multi-AZ deployments (for example, Amazon EC2 Auto Scaling groups, Elastic Load Balancing, Amazon FSx, Amazon RDS)
• Implement fault-tolerant workloads (for example, Amazon Elastic File System [Amazon EFS], Elastic IP addresses)
• Implement Route 53 routing policies (for example, failover, weighted, latency based)
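The weighted routing policy mentioned above sends each record a share of traffic equal to its weight divided by the sum of all weights; for example:

```python
# Route 53 weighted routing: traffic share = record weight / sum of weights.
# Region names here are just labels for three hypothetical record sets.
weights = {"us-east-1": 70, "eu-west-1": 20, "ap-south-1": 10}
total = sum(weights.values())

share = {region: w / total for region, w in weights.items()}

assert share["us-east-1"] == 0.7   # 70 / 100
assert abs(sum(share.values()) - 1.0) < 1e-9
```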

2.3 Implement backup and restore strategies
• Automate snapshots and backups based on use cases (for example, RDS snapshots, AWS Backup, RTO and RPO, Amazon Data Lifecycle Manager, retention policy)
• Restore databases (for example, point-in-time restore, promote read replica)
• Implement versioning and lifecycle rules
• Configure Amazon S3 Cross-Region Replication
• Execute disaster recovery procedures
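Versioning and lifecycle rules (task 2.3) are configured as rule documents. Below is a sketch of one S3 lifecycle rule in the dict shape used by the S3 lifecycle configuration API; the rule ID and prefix are hypothetical:

```python
# One S3 lifecycle rule: move objects under backups/ to Standard-IA after
# 30 days, to Glacier after 90, and delete them after a year.
lifecycle_rule = {
    "ID": "archive-old-backups",          # hypothetical rule name
    "Filter": {"Prefix": "backups/"},     # hypothetical prefix
    "Status": "Enabled",
    "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 90, "StorageClass": "GLACIER"},
    ],
    "Expiration": {"Days": 365},
}

# Transitions must occur in increasing order of Days.
days = [t["Days"] for t in lifecycle_rule["Transitions"]]
assert days == sorted(days)
```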

Domain 3: Deployment, Provisioning, and Automation
3.1 Provision and maintain cloud resources
• Create and manage AMIs (for example, EC2 Image Builder)
• Create, manage, and troubleshoot AWS CloudFormation
• Provision resources across multiple AWS Regions and accounts (for example, AWS Resource Access Manager, CloudFormation StackSets, IAM cross-account roles)
• Select deployment scenarios and services (for example, blue/green, rolling, canary)
• Identify and remediate deployment issues (for example, service quotas, subnet sizing, CloudFormation and AWS OpsWorks errors, permissions)

3.2 Automate manual or repeatable processes
• Use AWS services (for example, OpsWorks, Systems Manager, CloudFormation) to automate deployment processes
• Implement automated patch management
• Schedule automated tasks by using AWS services (for example, EventBridge, AWS Config)

Domain 4: Security and Compliance
4.1 Implement and manage security and compliance policies
• Implement IAM features (for example, password policies, MFA, roles, SAML, federated identity, resource policies, policy conditions)
• Troubleshoot and audit access issues by using AWS services (for example, CloudTrail, IAM Access Analyzer, IAM policy simulator)
• Validate service control policies and permission boundaries
• Review AWS Trusted Advisor security checks
• Validate AWS Region and service selections based on compliance requirements
• Implement secure multi-account strategies (for example, AWS Control Tower, AWS Organizations)
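The "policy conditions" item in task 4.1 refers to IAM condition blocks. Here is a sketch of a policy statement that allows S3 reads only when the caller authenticated with MFA; the bucket name is hypothetical:

```python
# IAM identity policy: s3:GetObject is allowed only when the request was
# made with MFA (aws:MultiFactorAuthPresent condition key).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-audit-bucket/*",  # hypothetical bucket
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }
    ],
}

# Condition values in IAM JSON are strings, not booleans.
assert policy["Statement"][0]["Condition"]["Bool"]["aws:MultiFactorAuthPresent"] == "true"
```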

4.2 Implement data and infrastructure protection strategies
• Enforce a data classification scheme
• Create, manage, and protect encryption keys
• Implement encryption at rest (for example, AWS Key Management Service [AWS KMS])
• Implement encryption in transit (for example, AWS Certificate Manager, VPN)
• Securely store secrets by using AWS services (for example, AWS Secrets Manager, Systems Manager Parameter Store)
• Review reports or findings (for example, AWS Security Hub, Amazon GuardDuty, AWS Config, Amazon Inspector)

Domain 5: Networking and Content Delivery
5.1 Implement networking features and connectivity
• Configure a VPC (for example, subnets, route tables, network ACLs, security groups, NAT gateway, internet gateway)
• Configure private connectivity (for example, Systems Manager Session Manager, VPC endpoints, VPC peering, VPN)
• Configure AWS network protection services (for example, AWS WAF, AWS Shield)
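Subnet sizing, as in task 5.1, is ordinary CIDR math, which Python's ipaddress module handles directly; note that AWS reserves five addresses in every subnet:

```python
import ipaddress

# Carve a /16 VPC into /24 subnets: 256 subnets of 256 addresses each.
vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))

assert len(subnets) == 256
assert subnets[0] == ipaddress.ip_network("10.0.0.0/24")

# AWS reserves 5 addresses per subnet (network, router, DNS, future use,
# broadcast), so a /24 yields 251 usable addresses.
assert subnets[0].num_addresses - 5 == 251
```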

5.2 Configure domains, DNS services, and content delivery

• Configure Route 53 hosted zones and records
• Implement Route 53 routing policies (for example, geolocation, geoproximity)
• Configure DNS (for example, Route 53 Resolver)
• Configure Amazon CloudFront and S3 origin access identity (OAI)
• Configure S3 static website hosting

5.3 Troubleshoot network connectivity issues
• Interpret VPC configurations (for example, subnets, route tables, network ACLs, security groups)
• Collect and interpret logs (for example, VPC Flow Logs, Elastic Load Balancer access logs, AWS WAF web ACL logs, CloudFront logs)
• Identify and remediate CloudFront caching issues
• Troubleshoot hybrid and private connectivity issues
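Default-format VPC Flow Log records (task 5.3) are space-separated; the action field (ACCEPT or REJECT) is usually the first thing to check when diagnosing a security group or network ACL denial. The sample record below is illustrative, with a dummy account ID and interface ID:

```python
# Field order of the default (version 2) flow log format; hyphens in the
# official field names are replaced with underscores for dict keys.
FIELDS = ("version account_id interface_id srcaddr dstaddr srcport dstport "
          "protocol packets bytes start end action log_status").split()

# Illustrative record: 10 packets / 8400 bytes of HTTPS traffic rejected.
record = ("2 123456789012 eni-0a1b2c3d 10.0.1.5 10.0.2.9 49152 443 "
          "6 10 8400 1620000000 1620000060 REJECT OK")

parsed = dict(zip(FIELDS, record.split()))

assert parsed["action"] == "REJECT"
assert parsed["dstport"] == "443"   # protocol 6 = TCP
```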

Domain 6: Cost and Performance Optimization
6.1 Implement cost optimization strategies
• Implement cost allocation tags
• Identify and remediate underutilized or unused resources by using AWS services and tools (for example, Trusted Advisor, AWS Compute Optimizer, Cost Explorer)
• Configure AWS Budgets and billing alarms
• Assess resource usage patterns to qualify workloads for EC2 Spot Instances
• Identify opportunities to use managed services (for example, Amazon RDS, AWS Fargate, EFS)

6.2 Implement performance optimization strategies
• Recommend compute resources based on performance metrics
• Monitor Amazon EBS metrics and modify configuration to increase performance efficiency
• Implement S3 performance features (for example, S3 Transfer Acceleration, multipart uploads)
• Monitor RDS metrics and modify the configuration to increase performance efficiency (for example, Performance Insights, RDS Proxy)
• Enable enhanced EC2 capabilities (for example, enhanced network adapter, instance store, placement groups)
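Multipart uploads (task 6.2) must respect S3's limits of 10,000 parts per upload and a 5 MiB minimum part size (all parts except the last). Choosing a part size for a large object is then simple arithmetic:

```python
import math

MIB = 1024 * 1024
object_size = 50 * 1024 * MIB   # a 50 GiB object, for illustration

# Smallest part size that keeps the upload under 10,000 parts while
# respecting the 5 MiB minimum.
part_size = max(5 * MIB, math.ceil(object_size / 10_000))
part_count = math.ceil(object_size / part_size)

assert part_count <= 10_000
assert part_size >= 5 * MIB
```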

QUESTION 1
A SysOps administrator is creating two AWS CloudFormation templates. The first template will create a VPC
with associated resources, such as subnets, route tables, and an internet gateway. The second template will
deploy application resources within the VPC that was created by the first template. The second template
should refer to the resources created by the first template.
How can this be accomplished with the LEAST amount of administrative effort?

A. Add an export field to the outputs of the first template and import the values in the second template.
B. Create a custom resource that queries the stack created by the first template and retrieves the required values.
C. Create a mapping in the first template that is referenced by the second template.
D. Input the names of resources in the first template and refer to those names in the second template as a parameter.

Correct Answer: A
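Option A's Outputs/Export and Fn::ImportValue mechanism can be sketched as two CloudFormation template fragments in dict form; the logical IDs and export name below are hypothetical:

```python
# Stack 1 exports the VPC ID under a name; stack 2 imports it by that name.
network_stack = {
    "Outputs": {
        "VpcId": {
            "Value": {"Ref": "MyVPC"},                 # hypothetical VPC resource
            "Export": {"Name": "network-VpcId"},       # hypothetical export name
        }
    }
}

app_stack = {
    "Resources": {
        "AppSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "App security group",
                "VpcId": {"Fn::ImportValue": "network-VpcId"},
            },
        }
    }
}

# The import name must match the export name exactly, account- and
# Region-wide, for the cross-stack reference to resolve.
exported = network_stack["Outputs"]["VpcId"]["Export"]["Name"]
imported = app_stack["Resources"]["AppSecurityGroup"]["Properties"]["VpcId"]["Fn::ImportValue"]
assert exported == imported
```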

QUESTION 2
A company has deployed a web application in a VPC that has subnets in three Availability Zones. The
company launches three Amazon EC2 instances from an EC2 Auto Scaling group behind an Application Load Balancer (ALB).
A SysOps administrator notices that two of the EC2 instances are in the same Availability Zone, rather than
being distributed evenly across all three Availability Zones. There are no errors in the Auto Scaling group’s activity history.
What is the MOST likely reason for the unexpected placement of EC2 instances?

A. One Availability Zone did not have sufficient capacity for the requested EC2 instance type.
B. The ALB was configured for only two Availability Zones.
C. The Auto Scaling group was configured for only two Availability Zones.
D. Amazon EC2 Auto Scaling randomly placed the instances in Availability Zones.

Correct Answer: C

QUESTION 3
A company is running an application on premises and wants to use AWS for data backup. All of the data must
be available locally. The backup application can write only to block-based storage that is compatible with the
Portable Operating System Interface (POSIX).
Which backup solution will meet these requirements?

A. Configure the backup software to use Amazon S3 as the target for the data backups.
B. Configure the backup software to use Amazon S3 Glacier as the target for the data backups.
C. Use AWS Storage Gateway, and configure it to use gateway-cached volumes.
D. Use AWS Storage Gateway, and configure it to use gateway-stored volumes.

Correct Answer: D

QUESTION 4
A company asks a SysOps administrator to ensure that AWS CloudTrail files are not tampered with after they
are created. Currently, the company uses AWS Identity and Access Management (IAM) to restrict access to
specific trails. The company’s security team needs the ability to trace the integrity of each file.
What is the MOST operationally efficient solution that meets these requirements?

A. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that invokes an AWS Lambda function
when a new file is delivered. Configure the Lambda function to compute an MD5 hash check on the file and
store the result in an Amazon DynamoDB table. The security team can use the values that are stored in
DynamoDB to verify the integrity of the delivered files.

B. Create an AWS Lambda function that is invoked each time a new file is delivered to the CloudTrail bucket.
Configure the Lambda function to compute an MD5 hash check on the file and store the result as a tag in
an Amazon S3 object. The security team can use the information in the tag to verify the integrity of the
delivered files.

C. Enable the CloudTrail file integrity feature on an Amazon S3 bucket. Create an IAM policy that grants the
security team access to the file integrity logs that are stored in the S3 bucket.

D. Enable the CloudTrail file integrity feature on the trail. The security team can use the digest file that is
created by CloudTrail to verify the integrity of the delivered files.

Correct Answer: D

Actualkey Amazon AWS SOA-C02 Exam pdf, Certkingdom Amazon AWS SOA-C02 PDF

MCTS Training, MCITP Training

Best Amazon AWS SOA-C02 Certification, Amazon AWS SOA-C02 Training at certkingdom.com

Amazon AWS Certified Developer Associate DVA-C01 Exam

The AWS Certified Developer – Associate examination is intended for individuals who perform a development role and have one or more years of hands-on experience developing and maintaining an AWS-based application.
Abilities Validated by the Certification

Demonstrate an understanding of core AWS services, uses, and basic AWS architecture best practices
Demonstrate proficiency in developing, deploying, and debugging cloud-based applications using AWS

Recommended Knowledge and Experience
In-depth knowledge of at least one high-level programming language
Understanding of core AWS services, uses, and basic AWS architecture best practices
Proficiency in developing, deploying, and debugging cloud-based applications using AWS
Ability to use the AWS service APIs, AWS CLI, and SDKs to write applications
Ability to identify key features of AWS services
Understanding of the AWS shared responsibility model
Understanding of application lifecycle management
Ability to use a CI/CD pipeline to deploy applications on AWS
Ability to use or interact with AWS services
Ability to apply a basic understanding of cloud-native applications to write code
Ability to write code using AWS security best practices (e.g., not using secret and access keys in the code, instead using IAM roles)
Ability to author, maintain, and debug code modules on AWS
Proficiency writing code for serverless applications
Understanding of the use of containers in the development process

Prepare for Your Exam
There is no better preparation than hands-on experience. There are many relevant AWS Training courses and other resources to assist you with acquiring additional knowledge and skills to prepare for certification. Please review the exam guide for information about the competencies assessed on the certification exam.

Exam Content
Response Types
There are two types of questions on the examination:
• Multiple choice: Has one correct response and three incorrect responses (distractors).
• Multiple response: Has two or more correct responses out of five or more options.
Select one or more responses that best complete the statement or answer the question. Distractors, or incorrect answers, are response options that an examinee with incomplete knowledge or skill would likely choose. However, they are generally plausible responses that fit in the content area defined by the test objective.

Unanswered questions are scored as incorrect; there is no penalty for guessing.

Unscored Content
Your examination may include non-scored questions that are placed on the test to gather statistical information. These questions are not identified on the form, and do not affect your score.

Exam Results
The AWS Certified Developer – Associate (DVA-C01) examination is a pass or fail exam. The examination is scored against a minimum standard established by AWS professionals guided by certification industry best practices and guidelines.
Your results for the examination are reported as a score from 100–1,000, with a minimum passing score of 720. Your score shows how you performed on the examination as a whole and whether or not you passed. Scaled scoring models are used to equate scores across multiple exam forms that may have slightly different difficulty levels.
Your score report contains a table of classifications of your performance at each section level. This information is designed to provide general feedback concerning your examination performance. The examination uses a compensatory scoring model, which means that you do not need to “pass” the individual sections, only the overall examination. Each section of the examination has a specific weighting, so some sections have more questions than others. The table contains general information, highlighting your strengths and weaknesses. Exercise caution when interpreting section-level feedback.

Content Outline
This exam guide includes weightings, test domains, and objectives only. It is not a comprehensive listing of the content on this examination. The table below lists the main content domains and their weightings.

Domain % of Examination

Domain 1: Deployment 22%
Domain 2: Security 26%
Domain 3: Development with AWS Services 30%
Domain 4: Refactoring 10%
Domain 5: Monitoring and Troubleshooting 12%
TOTAL 100%

Domain 1: Deployment
1.1 Deploy written code in AWS using existing CI/CD pipelines, processes, and patterns
1.2 Deploy applications using Elastic Beanstalk
1.3 Prepare the application deployment package to be deployed to AWS
1.4 Deploy serverless applications

Domain 2: Security

2.1 Make authenticated calls to AWS services
2.2 Implement encryption using AWS services
2.3 Implement application authentication and authorization
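Authenticated calls (objective 2.1) use AWS Signature Version 4, which derives a signing key through chained HMACs over the date, Region, and service before signing the request. A sketch of just the key-derivation step, using dummy credentials:

```python
import hashlib
import hmac

# SigV4 signing-key derivation: HMAC-SHA256 chained over date, Region,
# service, and the literal "aws4_request".
def sigv4_signing_key(secret_key: str, date: str, region: str, service: str) -> bytes:
    def _hmac(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode(), hashlib.sha256).digest()

    k_date = _hmac(("AWS4" + secret_key).encode(), date)
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")

# Dummy credentials; real code would never hard-code a secret key.
key = sigv4_signing_key("dummy-secret", "20240101", "us-east-1", "s3")
assert len(key) == 32  # HMAC-SHA256 output length
```

In practice the SDKs and AWS CLI perform this signing automatically; the sketch only shows why a leaked signing key is scoped to one date, Region, and service.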

Domain 3: Development with AWS Services
3.1 Write code for serverless applications
3.2 Translate functional requirements into application design
3.3 Implement application design into application code
3.4 Write code that interacts with AWS services by using APIs, SDKs, and AWS CLI

Domain 4: Refactoring
4.1 Optimize application to best use AWS services and features
4.2 Migrate existing application code to run on AWS

Domain 5: Monitoring and Troubleshooting
5.1 Write code that can be monitored
5.2 Perform root cause analysis on faults found in testing or production

QUESTION 1
A Developer created a dashboard for an application using Amazon API Gateway, Amazon S3, AWS Lambda,
and Amazon RDS. The Developer needs an authentication mechanism allowing a user to sign in and view the
dashboard. It must be accessible from mobile applications, desktops, and tablets, and must remember user preferences across platforms.
Which AWS service should the Developer use to support this authentication scenario?

A. AWS KMS
B. Amazon Cognito
C. AWS Directory Service
D. Amazon IAM

Correct Answer: B

QUESTION 2
A Developer has created an S3 bucket s3://mycoolapp and has enabled server access logging that points
to the folder s3://mycoolapp/logs. The Developer moved 100 KB of Cascading Style Sheets (CSS)
documents to the folder s3://mycoolapp/css, and then stopped work. When the Developer came back a
few days later, the bucket was 50 GB.
What is the MOST likely cause of this situation?

A. The CSS files were not compressed and S3 versioning was enabled.
B. S3 replication was enabled on the bucket.
C. Logging into the same bucket caused exponential log growth.
D. An S3 lifecycle policy has moved the entire CSS file to S3 Infrequent Access.

Correct Answer: C

QUESTION 3
A Developer is creating an Auto Scaling group whose instances need to publish a custom metric to Amazon CloudWatch.
Which method would be the MOST secure way to authenticate a CloudWatch PUT request?

A. Create an IAM user with PutMetricData permission and put the user credentials in a private repository; have applications pull the credentials as needed.
B. Create an IAM user with PutMetricData permission, and modify the Auto Scaling launch configuration to inject the user credentials into the instance user data.
C. Modify the CloudWatch metric policies to allow the PutMetricData permission to instances from the Auto Scaling group.
D. Create an IAM role with PutMetricData permission and modify the Auto Scaling launching configuration to launch instances using that role.

Correct Answer: D

QUESTION 4
A Developer has written a serverless application using multiple AWS services. The business logic is written as
a Lambda function which has dependencies on third-party libraries. The Lambda function endpoints will be
exposed using Amazon API Gateway. The Lambda function will write the information to Amazon DynamoDB.
The Developer is ready to deploy the application but must have the ability to rollback. How can this
deployment be automated, based on these requirements?

A. Deploy using Amazon Lambda API operations to create the Lambda function by providing a deployment package.
B. Use an AWS CloudFormation template and use CloudFormation syntax to define the Lambda function resource in the template.
C. Use syntax conforming to the Serverless Application Model in the AWS CloudFormation template to define the Lambda function resource.
D. Create a bash script which uses AWS CLI to package and deploy the application.
Correct Answer: C

QUESTION 5
What are the steps to using the AWS CLI to launch a templatized serverless application?

A. Use AWS CloudFormation get-template then CloudFormation execute-change-set.
B. Use AWS CloudFormation validate-template then CloudFormation create-change-set.
C. Use AWS CloudFormation package then CloudFormation deploy.
D. Use AWS CloudFormation create-stack then CloudFormation update-stack.

Correct Answer: C

Actualkey Amazon AWS Certified Developer Associate DVA-C01 exam pdf, Certkingdom Amazon AWS Certified Developer Associate DVA-C01 PDF

MCTS Training, MCITP Training

Best Amazon AWS Certified Developer Associate DVA-C01 Certification, Amazon AWS Certified Developer Associate DVA-C01 Training at certkingdom.com