1. A company has a requirement to store 100TB of data in AWS.

This data will be transferred to AWS using AWS Snowball and then needs to reside in a database layer. The database should be queryable from a business intelligence application. Each item is roughly 500KB in size. Which of the following is an ideal storage mechanism for the underlying data layer?

  1. AWS DynamoDB
  2. AWS Aurora
  3. AWS RDS
  4. AWS Redshift

Explanation: For data of this size, the ideal storage service would be AWS Redshift. AWS Documentation mentions the following on AWS Redshift: Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud. You can start with just a few hundred gigabytes of data and scale to a petabyte or more. This enables you to use your data to acquire new insights for your business and customers. The first step to create a data warehouse is to launch a set of nodes, called an Amazon Redshift cluster. After you provision your cluster, you can upload your data set and then perform data analysis queries. Regardless of the size of the data set, Amazon Redshift offers fast query performance using the same SQL-based tools and business intelligence applications that you use today. For more information on AWS Redshift, please refer to the URL below. (https://docs.aws.amazon.com/redshift/latest/mgmt/welcome.html) Option A is incorrect because the maximum item size in DynamoDB is 400KB. Option B is incorrect because Aurora supports up to 64TB of data. Option C is incorrect because MySQL, MariaDB, SQL Server, PostgreSQL, and Oracle RDS DB instances support at most 16 TiB of storage.

2. A company is planning on testing a large set of IoT enabled devices.

These devices will be streaming data every second. A proper service needs to be chosen in AWS which could be used to collect and analyze these streams in real time. Which of the following could be used for this purpose?

  1. Use AWS EMR to store and process the streams.
  2. Use AWS Kinesis streams to process and analyze the data.
  3. Use AWS SQS to store the data.
  4. Use SNS to store the data.

Explanation: AWS Documentation mentions the following on Amazon Kinesis: Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data so you can get timely insights and react quickly to new information. Amazon Kinesis offers key capabilities to cost-effectively process streaming data at any scale, along with the flexibility to choose the tools that best suit the requirements of your application. With Amazon Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications. For more information on Amazon Kinesis, please refer to the below URL: (https://aws.amazon.com/kinesis/) Option A: Amazon EMR can be used to process applications with data-intensive workloads. Option B: Amazon Kinesis can be used to store, process, and analyze real-time streaming data. Option C: SQS is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications. Option D: SNS is a flexible, fully managed pub/sub messaging and mobile notifications service for coordinating the delivery of messages to subscribing endpoints and clients.
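
As an illustration of the ingest side, here is a minimal sketch (the stream name `iot-telemetry-stream` and the device IDs are hypothetical) showing how a device or gateway could push readings into a Kinesis data stream with boto3:

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def send_reading(device_id, reading):
    # Records with the same partition key land on the same shard,
    # which preserves per-device ordering.
    kinesis.put_record(
        StreamName="iot-telemetry-stream",   # hypothetical stream name
        Data=json.dumps({"device_id": device_id, "reading": reading}).encode("utf-8"),
        PartitionKey=device_id,
    )

send_reading("sensor-001", {"temperature": 21.4})
```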

3. Your company currently has a set of EC2 Instances hosted in AWS.

The states of these instances need to be monitored and each state change needs to be recorded. Which of the following can help fulfill this requirement? Choose 2 answers from the options given below.

  1. Use CloudWatch logs to store the state change of the instances.
  2. Use CloudWatch Events to monitor the state change of the instances.
  3. Use SQS to trigger a record to be added to a DynamoDB table.
  4. Use AWS Lambda to store a change record in a DynamoDB table.

Explanation: CloudWatch Events can be used to monitor the state change of EC2 Instances. The Event Source and the Event Type can be chosen (EC2 Instance State-change Notification). An AWS Lambda function can then serve as a target which can then be used to store the record in a DynamoDB table.

CloudWatch Events info: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/WhatIsCloudWatchEvents.html
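
A minimal sketch of the Lambda target is shown below; it assumes a hypothetical DynamoDB table named `Ec2StateChanges` and is invoked by a CloudWatch Events rule for "EC2 Instance State-change Notification" events:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Ec2StateChanges")  # hypothetical table name

def handler(event, context):
    # The state-change event carries the instance id and new state
    # in its "detail" field.
    detail = event.get("detail", {})
    table.put_item(
        Item={
            "InstanceId": detail.get("instance-id"),
            "Timestamp": event.get("time"),
            "State": detail.get("state"),
        }
    )
```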

4. You have instances hosted in a private subnet in a VPC.

There is a need for the instances to download updates from the Internet. As an architect, what change would you suggest to the IT Operations team that would be the most efficient and secure?

  1. Create a new public subnet and move the instance to that subnet.
  2. Create a new EC2 Instance to download the updates separately and then push them to the required instance.
  3. Use a NAT Gateway to allow the instances in the private subnet to download the updates.
  4. Create a VPC link to the internet to allow the instances in the private subnet to download the updates.

Explanation: The NAT Gateway is an ideal option to ensure that instances in the private subnet have the ability to download updates from the Internet. For more information on the NAT Gateway, please refer to the below URL: (https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html) Option A is not suitable because there may be a security reason for keeping these instances in the private subnet (for example, DB instances). Option B is also incorrect. The instances in the private subnet may be running various applications and DB instances. Hence, it is not advisable or practical for an EC2 Instance to download the updates separately and then push them to the required instance. Option D is incorrect because a VPC link is not used to connect to the Internet.
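
For reference, a minimal sketch of creating a NAT Gateway and routing a private subnet through it is shown below (the subnet and route table IDs are hypothetical):

```python
import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP and create the NAT Gateway in a public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0publicaaaa",          # hypothetical public subnet
    AllocationId=eip["AllocationId"],
)
nat_gw_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the gateway is available, then point the private subnet's
# route table at it for Internet-bound traffic.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_gw_id])
ec2.create_route(
    RouteTableId="rtb-0privatebbbb",        # hypothetical private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_gw_id,
)
```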

5. You plan on hosting a web application on AWS.

You create an EC2 Instance in a public subnet which needs to connect to an EC2 Instance that will host an Oracle database. Which of the following steps should be taken to ensure that a secure setup is in place?

Choose 2 answers from the choices below.

  1. Place the EC2 Instance with the Oracle database in the same public subnet as the Webserver for faster communication.
  2. Place the EC2 Instance with the Oracle database in a separate private subnet.
  3. Create a database Security group which allows incoming traffic only from the Web server’s security group.
  4. Ensure that the database security group allows incoming traffic from 0.0.0.0/0

Explanation: The best and most secure option is to place the database in a private subnet, as illustrated in the AWS Documentation for this scenario. Also, ensure that access is allowed only from the web servers' security group and not from all sources. For more information on this type of setup, please refer to the below URL: (https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html) Option A is incorrect because, per best practice guidelines, DB instances are placed in private subnets and allowed to communicate with web servers in the public subnet. Option D is incorrect because allowing all incoming traffic from the Internet to the DB instance is a security risk.
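
A minimal sketch of the security group rule is shown below; the security group IDs are hypothetical and port 1521 is Oracle's default listener port:

```python
import boto3

ec2 = boto3.client("ec2")

# Allow the Oracle listener port only from the web tier's security group;
# no CIDR-based rule is opened to the Internet.
ec2.authorize_security_group_ingress(
    GroupId="sg-0databaseaaaa",               # hypothetical DB security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 1521,
        "ToPort": 1521,
        "UserIdGroupPairs": [
            {"GroupId": "sg-0webserverbbbb"}  # hypothetical web server SG
        ],
    }],
)
```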

6. A company planning on building and deploying a web application on AWS needs a data store for session data.

Which of the below services can be used to meet this requirement? Please select 2 correct options.

  1. AWS RDS
  2. AWS SQS
  3. DynamoDB
  4. AWS ElastiCache

Explanation: Amazon ElastiCache offers fully managed Redis and Memcached. Seamlessly deploy, operate, and scale popular open source compatible in-memory data stores. Build data-intensive apps or improve the performance of your existing apps by retrieving data from high throughput and low latency in-memory data stores. Amazon ElastiCache is a popular choice for Gaming, Ad-Tech, Financial Services, Healthcare, and IoT apps. DynamoDB can also be used to store session state, since it provides consistent, low-latency key-value access at any scale.
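
As a sketch of the session-store pattern with ElastiCache for Redis, the example below uses the `redis` Python package; the endpoint hostname and session contents are hypothetical:

```python
import json
import redis

# Endpoint of the ElastiCache Redis cluster (hypothetical).
r = redis.Redis(host="my-sessions.abc123.0001.use1.cache.amazonaws.com", port=6379)

def save_session(session_id, data, ttl_seconds=1800):
    # Store the session with a TTL so stale sessions expire automatically.
    r.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))

def load_session(session_id):
    raw = r.get(f"session:{session_id}")
    return json.loads(raw) if raw else None

save_session("abc-123", {"user_id": 42, "cart": ["item-1"]})
```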

7. A company has setup an application in AWS that interacts with DynamoDB.

It is required that when an item is modified in a DynamoDB table, an immediate entry is made in the associated application. How can this be accomplished? Choose 2 answers from the choices below.

  1. Setup CloudWatch to monitor the DynamoDB table for changes. Then trigger a Lambda function to send the changes to the application.
  2. Setup CloudWatch logs to monitor the DynamoDB table for changes. Then trigger AWS SQS to send the changes to the application.
  3. Use DynamoDB streams to monitor the changes to the DynamoDB table.
  4. Trigger a lambda function to make an associated entry in the application as soon as the DynamoDB streams are modified

Explanation: When you enable DynamoDB Streams on a table, you can associate the stream ARN with a Lambda function that you write. Immediately after an item in the table is modified, a new record appears in the table’s stream. AWS Lambda polls the stream and invokes your Lambda function synchronously when it detects new stream records. Since our requirement is to have an immediate entry made to an application in case an item in the DynamoDB table is modified, a Lambda function is also required.

DynamoDB streams can be used to monitor the changes to a DynamoDB table.

AWS Documentation mentions the following:

A DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.

For more information on DynamoDB streams, please refer to the URL below.

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html
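
A minimal sketch of the stream-triggered Lambda function is shown below; `notify_application` is a hypothetical routine standing in for the call into the company's application, and the old image is only present if the stream view type includes it:

```python
def handler(event, context):
    # AWS Lambda polls the DynamoDB stream and passes batches of records here.
    for record in event["Records"]:
        if record["eventName"] == "MODIFY":
            new_image = record["dynamodb"].get("NewImage", {})
            old_image = record["dynamodb"].get("OldImage", {})
            # Hypothetical downstream call that makes the entry in the application.
            notify_application(new_image, old_image)

def notify_application(new_image, old_image):
    print("Item changed:", new_image)
```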

8. A mobile-based application requires uploading images to S3.

As an architect, you do not want to make use of the existing web server to upload the images due to the load that it would incur. How can this be handled?

  1. Create a secondary S3 bucket. Then, use an AWS Lambda to sync the contents to the primary bucket.
  2. Use Pre-Signed URLs instead to upload the images.
  3. Use ECS Containers to upload the images.
  4. Upload the images to SQS and then push them to the S3 bucket.

Explanation: The S3 bucket owner can create Pre-Signed URLs to upload the images to S3. For more information on Pre-Signed URLs, please refer to the URL below. (https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html)
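
A minimal sketch of generating and using a pre-signed upload URL is shown below; the bucket name, object key, and local file are hypothetical, and `requests` is used only to demonstrate the client-side upload:

```python
import boto3
import requests

s3 = boto3.client("s3")

# Generate a time-limited URL that allows a single PUT to this key.
url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-image-bucket", "Key": "uploads/photo-001.jpg"},  # hypothetical
    ExpiresIn=3600,
)

# The mobile client can now upload directly to S3, bypassing the web server.
with open("photo-001.jpg", "rb") as f:
    requests.put(url, data=f)
```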

9. An application team needs to quickly provision a development environment consisting of a web and database layer.

Which of the following would be the quickest and most ideal way to get this setup in place?

  1. Create Spot Instances and install the Web and database components.
  2. Create Reserved Instances and install the Web and database components.
  3. Use AWS Lambda to create the web components and AWS RDS for the database layer.
  4. Use Elastic Beanstalk to quickly provision the environment.

Explanation: AWS Documentation mentions the following: With Elastic Beanstalk, you can quickly deploy and manage applications in the AWS Cloud without worrying about the infrastructure that runs those applications. AWS Elastic Beanstalk reduces management complexity without restricting choice or control. You simply upload your application, and Elastic Beanstalk automatically handles the details of capacity provisioning, load balancing, scaling, and application health monitoring. For more information on AWS Elastic Beanstalk, please refer to the URL below. (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html)

10. A company has an application that stores images and thumbnail images on S3. The thumbnail images need to be available for download immediately, but both the images and the thumbnail images are not accessed frequently.

Which is the most cost-efficient storage option that meets the above-mentioned requirements?

  1. Amazon Glacier with Expedited Retrievals.
  2. Amazon S3 Standard Infrequent Access
  3. Amazon EFS
  4. Amazon S3 Standard

11. You have an application hosted on AWS consisting of EC2 Instances launched via an Auto Scaling Group.

You notice that the EC2 Instances are not scaling out on demand. What checks can be done to ensure that the scaling occurs as expected?

  1. Ensure that the right metrics are being used to trigger the scale out.
  2. Ensure that ELB health checks are being used.
  3. Ensure that the instances are placed across multiple Availability Zones.
  4. Ensure that the instances are placed across multiple regions.

Explanation: If your scaling events are not based on the right metrics and do not have the right threshold defined, then the scaling will not occur as you want it to happen. For more information on Auto Scaling Dynamic Scaling, please visit the following URL: (https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html)

11. You have an application hosted on AWS that writes images to an S3 bucket.

The concurrent number of users on the application is expected to reach around 10,000 with approximately 500 reads and writes expected per second. How should the architect maximize Amazon S3 performance?

  1. Prefix each object name with a random string.
  2. Use the STANDARD_IA storage class.
  3. Prefix each object name with the current date.
  4. Enable versioning on the S3 bucket

Explanation:

If the request rate is high, you can use hash keys or random strings to prefix the object name. In such a case, the partitions used to store the objects will be better distributed and hence allow for better read/write performance for your objects.

For more information on how to ensure performance in S3, please visit the following URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
The STANDARD_IA storage class is for infrequent data access and does not improve performance. Option C is not a good solution because a predictable date prefix does not distribute objects across partitions. Versioning does not make any difference to the performance in this case.
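
As an illustration of the prefixing technique, here is a minimal sketch of deriving a short hash prefix from the key name (the key names shown are hypothetical):

```python
import hashlib

def prefixed_key(original_key):
    # Derive a short hash from the key and prepend it, so objects spread
    # across S3 partitions instead of clustering under one prefix.
    digest = hashlib.md5(original_key.encode("utf-8")).hexdigest()[:4]
    return f"{digest}/{original_key}"

# e.g. "a1b2/2019/02/13/image-1234.jpg" (actual prefix depends on the hash)
print(prefixed_key("2019/02/13/image-1234.jpg"))
```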

12. A company has an entire infrastructure hosted on AWS.

It wants to create code templates used to provision the same set of resources in another region in case of a disaster in the primary region. Which of the following services can help in this regard?

  1. AWS Beanstalk
  2. AWS CloudFormation
  3. AWS CodeBuild
  4. AWS CodeDeploy

Explanation: AWS Documentation provides the following information to support this requirement: AWS CloudFormation provisions your resources in a safe, repeatable manner, allowing you to build and rebuild your infrastructure and applications, without having to perform manual actions or write custom scripts. CloudFormation takes care of determining the right operations to perform when managing your stack, and rolls back changes automatically if errors are detected. For more information on AWS CloudFormation, please visit the following URL: (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html)

13. A company has a set of EBS Volumes that need to be catered for in case of a disaster.

How will you achieve this using existing AWS services effectively?

  1. Create a script to copy the EBS Volume to another Availability Zone.
  2. Create a script to copy the EBS Volume to another region.
  3. Use EBS Snapshots to create the volumes in another region.
  4. Use EBS Snapshots to create the volumes in another Availability Zone.

Explanation: Options A and B are incorrect because you can’t directly copy EBS Volumes. Option D is incorrect because disaster recovery always looks at ensuring resources are created in another region. AWS Documentation provides the following information to support this requirement: A snapshot is constrained to the region where it was created. After you create a snapshot of an EBS volume, you can use it to create new volumes in the same region. For more information, see Restoring an Amazon EBS Volume from a Snapshot (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-restoringvolume.html). You can also copy snapshots across regions, making it possible to use multiple regions for geographical expansion, data center migration, and disaster recovery. For more information on EBS Snapshots, please visit the following URL: (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html)
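
A minimal sketch of the cross-region snapshot copy is shown below; the regions and snapshot ID are hypothetical:

```python
import boto3

# copy_snapshot is called from the destination (DR) region's client and
# pulls the snapshot from the source region.
ec2_dr = boto3.client("ec2", region_name="us-west-2")    # hypothetical DR region

copy = ec2_dr.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",            # hypothetical snapshot id
    Description="DR copy of production data volume",
)
print("New snapshot in DR region:", copy["SnapshotId"])
```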

14. Your company has a set of EC2 Instances hosted in AWS.

There is a mandate to prepare for disasters and come up with the necessary disaster recovery procedures. Which of the following would help in mitigating the effects of a disaster for the EC2 Instances?

  1. Place an ELB in front of the EC2 Instances.
  2. Use Auto Scaling to ensure the minimum number of instances are always running.
  3. Use CloudFront in front of the EC2 Instances.
  4. Use AMIs to recreate the EC2 Instances in another region.

Explanation: You can create an AMI from the EC2 Instances and then copy it to another region. In case of a disaster, an EC2 Instance can be created from the AMI. Options A and B are good for fault tolerance, but cannot help completely in disaster recovery for the EC2 Instances. Option C is incorrect because we cannot determine if CloudFront would be helpful in this scenario or not without knowing what is hosted on the EC2 Instance. For disaster recovery, we have to make sure that we can launch instances in another region when required. Hence, Options A, B, and C are not feasible solutions. For more information on AWS AMIs, please visit the following URL: (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html)
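
A minimal sketch of copying an AMI into another region is shown below; the regions and AMI ID are hypothetical:

```python
import boto3

# copy_image is invoked on the destination (DR) region's client.
ec2_dr = boto3.client("ec2", region_name="eu-west-1")     # hypothetical DR region

copied = ec2_dr.copy_image(
    Name="web-server-ami-dr",
    SourceImageId="ami-0123456789abcdef0",                 # hypothetical AMI id
    SourceRegion="us-east-1",
)
# In a disaster, launch replacement instances from the copied AMI.
print("AMI available in DR region:", copied["ImageId"])
```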

15. A company currently hosts a Redshift cluster in AWS.

For security reasons, it should be ensured that all traffic from and to the Redshift cluster does not go through the Internet.

Which of the following features can be used to fulfill this requirement in an efficient manner?

  1. Enable Amazon Redshift Enhanced VPC Routing.
  2. Create a NAT Gateway to route the traffic.
  3. Create a NAT Instance to route the traffic.
  4. Create a VPN Connection to ensure traffic does not flow through the Internet.

Explanation: AWS Documentation mentions the following:
When you use Amazon Redshift Enhanced VPC Routing, Amazon Redshift forces all COPY and UNLOAD traffic between your cluster and your data repositories through your Amazon VPC. If Enhanced VPC Routing is not enabled, Amazon Redshift routes traffic through the Internet, including traffic to other services within the AWS network.
For more information on Redshift Enhanced Routing, please visit the following URL:
https://docs.aws.amazon.com/redshift/latest/mgmt/enhanced-vpc-routing.html
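
A minimal sketch of enabling Enhanced VPC Routing on an existing cluster is shown below; the cluster identifier is hypothetical:

```python
import boto3

redshift = boto3.client("redshift")

# Turn on Enhanced VPC Routing so COPY/UNLOAD traffic stays inside the VPC
# instead of traversing the Internet.
redshift.modify_cluster(
    ClusterIdentifier="analytics-cluster",   # hypothetical cluster id
    EnhancedVpcRouting=True,
)
```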

16. A company with a set of Admin jobs currently written in the C# programming language is moving its infrastructure to AWS.

Which of the following would be an efficient means of hosting the Admin related jobs in AWS?

  1. Use AWS DynamoDB to store the jobs and then run them on demand.
  2. Use AWS Lambda functions with C# for the Admin jobs.
  3. Use AWS S3 to store the jobs and then run them on demand.
  4. Use AWS Config functions with C# for the Admin jobs.

Explanation:

The best and most efficient option is to host the jobs using AWS Lambda. This service has the facility to have the code run in the C# programming language.

AWS Lambda is a compute service that lets you run code without provisioning or managing servers. AWS Lambda executes your code only when needed and scales automatically, from a few requests per day to thousands per second.
You pay only for the compute time you consume – there is no charge when your code is not running. With AWS Lambda, you can run code for virtually any type of application or backend service – all with zero administration.

17. Your company has a set of resources hosted on the AWS Cloud.

As a part of the new governing model, there is a requirement that all activity on AWS resources should be monitored. What is the most efficient way to have this implemented?

  1. Use VPC Flow Logs to monitor all activity in your VPC.
  2. Use AWS Trusted Advisor to monitor all of your AWS resources.
  3. Use AWS Inspector to inspect all of the resources in your account.
  4. Use AWS CloudTrail to monitor all API activity.

Explanation: AWS Documentation mentions the following on AWS CloudTrail: AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. With CloudTrail, you can log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services. This event history simplifies security analysis, resource change tracking, and troubleshooting. Visibility into your AWS account activity is a key aspect of security and operational best practices. You can use CloudTrail to view, search, download, archive, analyze, and respond to account activity across your AWS infrastructure. You can identify who or what took which action, what resources were acted upon, when the event occurred, and other details to help you analyze and respond to activity in your AWS account. You can integrate CloudTrail into applications using the API, automate trail creation for your organization, check the status of trails you create, and control how users view CloudTrail events. More information is available at the below URLs: (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-user-guide.html) (https://aws.amazon.com/cloudtrail/)

18. Below are the requirements for a data store in AWS:

a) Ability to perform SQL queries
b) Integration with existing business intelligence tools
c) High concurrency workload that generally involves reading and writing all columns for a small number of records at a time

Which of the following would be an ideal data store for the above requirements? Choose 2 answers from the options below.

  1. AWS Redshift
  2. AWS RDS
  3. AWS Aurora
  4. AWS S3

Explanation:

Amazon Aurora is a MySQL and PostgreSQL compatible relational database built for the cloud that combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases. Aurora Multi-Master adds the ability to scale out write performance across multiple Availability Zones, allowing applications to direct read/write workloads to multiple instances in a database cluster and operate with higher availability.

Because Amazon Redshift is a SQL-based relational database management system (RDBMS), it is compatible with other RDBMS applications and business intelligence tools. Although Amazon Redshift provides the functionality of a typical RDBMS, including online transaction processing (OLTP) functions, it is not designed for these workloads. If you expect a high concurrency workload that generally involves reading and writing all of the columns for a small number of records at a time, you should instead consider using Amazon RDS or Amazon DynamoDB. Hence, AWS RDS and AWS Aurora are the suitable options here, while Redshift and S3 are not.

19. A company currently uses Redshift in AWS.

The Redshift cluster is required to be used in a cost-effective manner. As an architect, which of the following would you consider to ensure cost-effectiveness?

  1. Use Spot Instances for the underlying nodes in the cluster.
  2. Ensure that unnecessary manual snapshots of the cluster are deleted.
  3. Ensure VPC Enhanced Routing is enabled.
  4. Ensure that CloudWatch metrics are disabled.

Explanation:

Amazon Redshift provides free storage for snapshots that is equal to the storage capacity of your cluster until you delete the cluster. After you reach the free snapshot storage limit, you are charged for any additional storage at the normal rate. Because of this, you should evaluate how many days you need to keep automated snapshots and configure their retention period accordingly, and delete any manual snapshots that you no longer need.
Redshift pricing is based on the following elements: compute node hours, backup storage, data transfer, and data scanned. There is no data transfer charge for data transferred to or from Amazon Redshift and Amazon S3 within the same AWS Region. For all other data transfers into and out of Amazon Redshift, you will be billed at standard AWS data transfer rates.

There is no additional charge for using Enhanced VPC Routing. You might incur additional data transfer charges for certain operations, such as UNLOAD to Amazon S3 in a different region or COPY from Amazon EMR or SSH with public IP addresses.
Enhanced VPC Routing itself does not incur any cost, but any UNLOAD operation to a different region will incur a cost; with or without Enhanced VPC Routing, data transfer to a different region does incur a cost. As for storage, increasing your backup retention period or taking additional snapshots increases the backup storage consumed by your data warehouse. There is no additional charge for backup storage up to 100% of your provisioned storage for an active data warehouse cluster; any amount of storage exceeding this limit does incur a cost.

20. A company has a set of resources hosted in an AWS VPC.

Having acquired another company with its own set of resources hosted in AWS, it is required to ensure that resources in the VPC of the parent company can access the resources in the VPC of the child company. How can this be accomplished?

  1. Establish a NAT Instance to establish communication across VPCs.
  2. Establish a NAT Gateway to establish communication across VPCs.
  3. Use a VPN Connection to peer both VPCs.
  4. Use VPC Peering to peer both VPCs.

Explanation: AWS Documentation mentions the following about VPC Peering: A VPC Peering Connection is a networking connection between two VPCs that enables you to route traffic between them privately. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC Peering Connection between your own VPCs, with a VPC in another AWS account, or with a VPC in a different AWS region. For more information on VPC Peering, please visit the following URL: (https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-peering.html) NAT Instances, NAT Gateways, and VPNs do not allow for VPC-to-VPC connectivity.
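
A minimal sketch of requesting, accepting, and routing a peering connection is shown below; the VPC, route table IDs, and CIDR block are hypothetical, and this assumes both VPCs live in the same account and region:

```python
import boto3

ec2 = boto3.client("ec2")

# Request a peering connection from the parent company's VPC to the
# child company's VPC, then accept it.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0parentaaaa",
    PeerVpcId="vpc-0childbbbb",
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side's route tables still need a route pointing the other VPC's
# CIDR block at the peering connection.
ec2.create_route(
    RouteTableId="rtb-0parentcccc",
    DestinationCidrBlock="10.1.0.0/16",      # child VPC CIDR (hypothetical)
    VpcPeeringConnectionId=pcx_id,
)
```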

21. An application consists of the following architecture:

a. EC2 Instances in a single AZ behind an ELB
b. A NAT Instance which is used to ensure that instances can download updates from the Internet

Which of the following can be used to ensure better fault tolerance in this setup? Choose 2 answers from the options given below.

  1. Add more instances in the existing Availability Zone.
  2. Add an Auto Scaling Group to the setup.
  3. Add more instances in another Availability Zone.
  4. Add another ELB for more fault tolerance.

Explanation: AWS Documentation mentions the following: Adding Auto Scaling to your application architecture is one way to maximize the benefits of the AWS Cloud. When you use Auto Scaling, your applications gain the following benefits: Better fault tolerance. Auto Scaling can detect when an instance is unhealthy, terminate it, and launch an instance to replace it. You can also configure Auto Scaling to use multiple Availability Zones. If one Availability Zone becomes unavailable, Auto Scaling can launch instances in another one to compensate. Better availability. Auto Scaling can help you ensure that your application always has the right amount of capacity to handle the current traffic demands. For more information on the benefits of Auto Scaling, please visit the following URL: (https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-benefits.html)

22. A company has a lot of data hosted on their On-premises infrastructure.

Running out of storage space, the company wants a quick win solution using AWS. Which of the following would allow easy extension of their data infrastructure to AWS?

  1. The company could start using Gateway Cached Volumes.
  2. The company could start using Gateway Stored Volumes.
  3. The company could start using the Simple Storage Service.
  4. The company could start using Amazon Glacier.

Explanation: Volume Gateways with Cached Volumes can be used to start storing data in S3. AWS Documentation mentions the following: You store your data in Amazon Simple Storage Service (Amazon S3) and retain a copy of frequently accessed data subsets locally. Cached volumes offer a substantial cost savings on primary storage and minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data. For more information on Storage Gateways, please visit the following URL: (https://docs.aws.amazon.com/storagegateway/latest/userguide/WhatIsStorageGateway.html) Note: The question states that they are running out of storage space and they need a solution to store data with AWS rather than a backup. For this purpose, gateway-cached volumes are appropriate, since they help the company avoid scaling their on-premises data center and allow them to store data on an AWS storage service while keeping the most recently used files available at low latency. This is the difference between cached and stored volumes: Cached volumes – You store your data in S3 and retain a copy of frequently accessed data subsets locally. Cached volumes offer substantial cost savings on primary storage and “minimize the need to scale your storage on-premises. You also retain low-latency access to your frequently accessed data.” Stored volumes – If you need low-latency access to your entire data set, first configure your on-premises gateway to store all your data locally. Then asynchronously back up point-in-time snapshots of this data to Amazon S3. “This configuration provides durable and inexpensive offsite backups that you can recover to your local data center or Amazon EC2.” For example, if you need replacement capacity for disaster recovery, you can recover the backups to Amazon EC2. As described in the answer: the company wants a quick-win solution to store data with AWS, avoiding scaling of the on-premises setup, rather than backing up the data.

23. A company has a set of EC2 Instances hosted on the AWS Cloud.

These instances form a web server farm which services a web application accessed by users on the Internet.

Which of the following would help make this architecture more fault tolerant? Choose 2 answers from the options given below.

  1. Ensure the instances are placed in separate Availability Zones.
  2. Ensure the instances are placed in separate regions.
  3. Use an AWS Load Balancer to distribute the traffic.
  4. Use Auto Scaling to distribute the traffic.

Explanation: A load balancer distributes incoming application traffic across multiple EC2 Instances in multiple Availability Zones. This increases the fault tolerance of your applications. Elastic Load Balancing detects unhealthy instances and routes traffic only to healthy instances.
You can automatically increase the size of your Auto Scaling group when demand goes up and decrease it when demand goes down. As the Auto Scaling group adds and removes EC2 instances, you must ensure that the traffic for your application is distributed across all of your EC2 instances. The Elastic Load Balancing service automatically routes incoming web traffic across such a dynamically changing number of EC2 instances. Your load balancer acts as a single point of contact for all incoming traffic to the instances in your Auto Scaling group.
To use a load balancer with your Auto Scaling group, create the load balancer and then attach it to the group.

24. A company stores its log data in an S3 bucket.

There is a current need to have search capabilities available for the data in S3. How can this be achieved in an efficient and ongoing manner? Choose 2 answers from the options below. Each answer forms a part of the solution.

  1. Use an AWS Lambda function which gets triggered whenever data is added to the S3 bucket.
  2. Create a Lifecycle Policy for the S3 bucket.
  3. Load the data into Amazon Elasticsearch.
  4. Load the data into Glacier.

Explanation: AWS Elasticsearch provides full search capabilities and can be used for log files stored in the S3 bucket. AWS Documentation mentions the following with regard to the integration of Elasticsearch with S3: You can integrate your Amazon ES domain with Amazon S3 and AWS Lambda. Any new data sent to an S3 bucket triggers an event notification to Lambda, which then runs your custom Java or Node.js application code. After your application processes the data, it streams the data to your domain. For more information on integration between Elasticsearch and S3, please visit the following URL: (https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/es-aws-integrations.html)

25. A company plans on deploying a batch processing application in AWS.

Which of the following is an ideal way to host this application? Choose 2 answers from the options below. Each answer forms a part of the solution.

  1. Copy the batch processing application to an ECS Container.
  2. Create a docker image of your batch processing application.
  3. Deploy the image as an Amazon ECS task.
  4. Deploy the container behind the ELB.

Explanation: Docker containers are particularly suited for batch job workloads. Batch jobs are often short-lived and embarrassingly parallel. You can package your batch processing application into a Docker image so that you can deploy it anywhere, such as in an Amazon ECS task.

26. Your company currently has a web distribution hosted using the AWS CloudFront service.

The IT Security department has confirmed that the application using this web distribution now falls under the scope of PCI compliance. What are the possible ways to meet the requirements? Choose two answers from the choices below.

  1. Enable CloudFront access logs.
  2. Enable Cache in CloudFront.
  3. Capture requests that are sent to the CloudFront API.
  4. Enable VPC Flow Logs

Explanation: AWS Documentation mentions the following: If you run PCI or HIPAA-compliant workloads based on the AWS Shared Responsibility Model (https://aws.amazon.com/compliance/shared-responsibility-model/), we recommend that you log your CloudFront usage data for the last 365 days for future auditing purposes. To log usage data, you can do the following: Enable CloudFront access logs. Capture requests that are sent to the CloudFront API. For more information on compliance with CloudFront, please visit the following URL: (https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/compliance.html) Option B helps to reduce latency. Option D – VPC Flow Logs capture information about the IP traffic going to and from network interfaces in a VPC, but not for CloudFront.

27. A customer planning on hosting an AWS RDS instance needs to ensure that the underlying data is encrypted.

How can this be achieved? Choose 2 answers from the options given below.

  1. Ensure that the right instance class is chosen for the underlying instance.
  2. Choose only General Purpose SSD since only this volume type supports encryption of data.
  3. Encrypt the database during creation.
  4. Enable encryption of the underlying EBS Volume.

28. A database hosted in AWS is currently encountering an extended period of heavy write operations and is not able to handle the load.

What can be done to the architecture to ensure that the write operations are not lost under any circumstance?

  1. Add more IOPS to the existing EBS Volume used by the database.
  2. Consider using DynamoDB instead of AWS RDS.
  3. Use SQS FIFO to queue the database writes.
  4. Use SNS to send notification on missed database writes and then add them manually at a later stage.

29. Your company is planning on using Route 53 as the DNS provider.

There is a need to ensure that the company’s domain name points to an existing CloudFront distribution.

How can this be achieved?

  1. Create an Alias record which points to the CloudFront distribution.
  2. Create a host record which points to the CloudFront distribution.
  3. Create a CNAME record which points to the CloudFront distribution.
  4. Create a Non-Alias Record which points to the CloudFront distribution.

Explanation: AWS Documentation mentions the following: While ordinary Amazon Route 53 records are standard DNS records, alias records provide a Route 53–specific extension to DNS functionality. Instead of an IP address or a domain name, an alias record contains a pointer to a CloudFront distribution, an Elastic Beanstalk environment, an ELB Classic, Application, or Network Load Balancer, an Amazon S3 bucket that is configured as a static website, or another Route 53 record in the same hosted zone. When Route 53 receives a DNS query that matches the name and type in an alias record, Route 53 follows the pointer and responds with the applicable value. For more information on Route 53 Alias records, please visit the following URL: (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/resource-record-sets-choosing-alias-non-alias.html) Note: Route 53 uses an Alias record to connect to CloudFront because an Alias record is a Route 53 extension to DNS. Also, an alias record is similar to a CNAME record, but the main difference is that you can create an alias record for both the root domain and subdomains, whereas a CNAME record can be created only for a subdomain. Check the below link from Amazon: (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-cloudfront-distribution.html)
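
A minimal sketch of creating such an alias record is shown below; the hosted zone ID, domain name, and distribution domain are hypothetical, while `Z2FDTNDATAQYW2` is the fixed hosted zone ID AWS documents for CloudFront alias targets:

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",                     # hypothetical hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "AliasTarget": {
                    # Fixed zone ID used for all CloudFront distributions.
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": "d111111abcdef8.cloudfront.net.",  # hypothetical distribution
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```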

30. A company needs to extend their storage infrastructure to the AWS Cloud.

The storage needs to be available as iSCSI devices for on-premises application servers. Which of the following would be able to fulfill this requirement?

  1. Create a Glacier vault. Use a Glacier Connector and mount it as an iSCSI device.
  2. Create an S3 bucket. Use an S3 Connector and mount it as an iSCSI device.
  3. Use the EFS file service and mount the different file systems to the on-premises servers.
  4. Use the AWS Storage Gateway-cached volumes service.

Explanation: AWS Documentation mentions the following: By using cached volumes, you can use Amazon S3 as your primary data storage, while retaining frequently accessed data locally in your storage gateway. Cached volumes minimize the need to scale your on-premises storage infrastructure, while still providing your applications with low-latency access to their frequently accessed data. You can create storage volumes up to 32 TiB in size and attach to them as iSCSI devices from your on-premises application servers. Your gateway stores data that you write to these volumes in Amazon S3 and retains recently read data in your on-premises storage gateway’s cache and upload buffer storage. For more information on AWS Storage Gateways, please visit the following URL: (https://docs.aws.amazon.com/storagegateway/latest/userguide/StorageGatewayConcepts.html)

31. Your company has a set of applications that make use of Docker containers used by the Development team.

There is a need to move these containers to AWS. Which of the following methods could be used to set up these Docker containers in a separate environment in AWS?

  1. Create EC2 Instances, install Docker and then upload the containers.
  2. Create EC2 Container registries, install Docker and then upload the containers.
  3. Create an Elastic Beanstalk environment with the necessary Docker containers.
  4. Create EBS Optimized EC2 Instances, install Docker and then upload the containers.

Explanation: The Elastic Beanstalk service can be used to host Docker containers. AWS Documentation further mentions the following: Elastic Beanstalk supports the deployment of web applications from Docker containers. With Docker containers, you can define your own runtime environment. You can choose your own platform, programming language, and any application dependencies (such as package managers or tools), that aren’t supported by other platforms. Docker containers are self-contained and include all the configuration information and software your web application requires to run. For more information on using Elastic Beanstalk for Docker containers, please visit the following URL: (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html) Note: Option A could be partly correct as we need to install Docker on the EC2 instance. In addition to this, you need to create an ECS Task definition which details the Docker image that we need to use for containers and how many containers to be used, as well as the resource allocation for each container. But with Option C, we have the added advantage that, if a Docker container running in an Elastic Beanstalk environment crashes or is killed for any reason, Elastic Beanstalk restarts it automatically. In the question we have been asked about the best method to set up Docker containers, hence Option C seems to be more appropriate. More information is available at: (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.html) (https://aws.amazon.com/getting-started/tutorials/deploy-docker-containers/)

32. Instances in your private subnet hosted in AWS need access to important documents in S3.

Due to the confidential nature of these documents, you have to ensure that this traffic does not traverse through the internet. As an architect, how would you implement this solution?

  1. Consider using a VPC Endpoint.
  2. Consider using an EC2 Endpoint.
  3. Move the instances to a public subnet.
  4. Create a VPN connection and access the S3 resources from the EC2 Instance.

Explanation: AWS Documentation mentions the following: A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other services does not leave the Amazon network. For more information on VPC Endpoints, please visit the following URL: (https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html)
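
A minimal sketch of creating a gateway endpoint for S3 is shown below; the VPC and route table IDs are hypothetical and the service name assumes the us-east-1 region:

```python
import boto3

ec2 = boto3.client("ec2")

# A gateway endpoint for S3 adds a route to the private subnet's route
# table so S3 traffic stays on the Amazon network.
ec2.create_vpc_endpoint(
    VpcId="vpc-0aaaabbbb",                             # hypothetical VPC id
    ServiceName="com.amazonaws.us-east-1.s3",
    VpcEndpointType="Gateway",
    RouteTableIds=["rtb-0privatecccc"],                # hypothetical route table
)
```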

33. You have a video transcoding application running on Amazon EC2. Each instance polls a queue to find out which video should be transcoded, and then runs a transcoding process.

If this process is interrupted, the video gets transcoded by another instance based on the queuing system. You have a large backlog of videos that need to be transcoded and you would like to reduce this backlog by adding more instances. These instances will only be needed until the backlog is reduced. What Amazon EC2 Instance type should you use to reduce the backlog in the most cost-efficient way?

  1. Reserved Instances
  2. Spot Instances
  3. Dedicated Instances
  4. On-Demand Instances

34. A company has a workflow that sends video files from their on-premises system to AWS for transcoding.

They use EC2 worker instances to pull transcoding jobs from SQS. Why is SQS an appropriate service for this scenario?

  1. SQS guarantees the order of the messages.
  2. SQS synchronously provides transcoding output.
  3. SQS checks the health of the worker instances.
  4. SQS helps to facilitate horizontal scaling of encoding tasks.

Explanation: Even though SQS guarantees the order of messages for FIFO queues, the main reason for using it here is that it helps in horizontal scaling of AWS resources and is used for decoupling systems. SQS can neither be used for transcoding output nor for checking the health of worker instances. The health of worker instances can be checked via ELB or CloudWatch.
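
A minimal sketch of the worker pattern is shown below; the queue URL and the `transcode` routine are hypothetical, and each worker instance would run the same loop so adding instances scales throughput horizontally:

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/transcode-jobs"  # hypothetical

def worker_loop():
    while True:
        # Long polling reduces empty receives.
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            transcode(msg["Body"])  # hypothetical transcoding routine
            # Deleting only after success means an interrupted job becomes
            # visible again and is picked up by another worker.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

def transcode(job_body):
    print("Transcoding", job_body)
```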

35. A company wants to create standard templates for deployment of their Infrastructure.

These would also be used to provision resources in another region during disaster recovery scenarios. Which AWS service can be used in this regard?

  1. Amazon Simple Workflow Service
  2. AWS Elastic Beanstalk
  3. AWS CloudFormation
  4. AWS OpsWorks

36. You have a set of EC2 Instances that support an application.

They are currently hosted in the US Region. In the event of a disaster, you need a way to ensure that you can quickly provision the resources in another region. How could this be accomplished? Choose 2 answers from the options given below.

  1. Copy the underlying EBS Volumes to the destination region.
  2. Create EBS Snapshots and then copy them to the destination region.
  3. Create AMIs for the underlying instances.
  4. Copy the metadata for the EC2 Instances to S3.

37. A company wants to build a brand new application on the AWS Cloud.

They want to ensure that this application follows the Microservices architecture. Which of the following services can be used to build this sort of architecture? Choose 3 answers from the options given below.

  1. AWS Lambda
  2. AWS ECS
  3. AWS API Gateway
  4. AWS Config

Explanation: AWS Lambda is a serverless compute service that allows you to build independent services. The Elastic Container Service (ECS) can be used to manage containers. The API Gateway is a serverless component for managing access to APIs.
For more information about Microservices on AWS, please visit the following URL:
https://aws.amazon.com/microservices/

38. A company is planning on using the AWS Redshift service.

The Redshift service and data on it would be used continuously for the next 3 years as per the current business plan.

Which of the following would be the most cost-effective solution in this scenario?

  1. Consider using On-demand instances for the Redshift Cluster.
  2. Enable Automated backup.
  3. Consider using Reserved Instances for the Redshift Cluster.
  4. Consider not using a cluster for the Redshift nodes.

Explanation: AWS Documentation mentions the following: If you intend to keep your Amazon Redshift cluster running continuously for a prolonged period, you should consider purchasing reserved node offerings. These offerings provide significant savings over on-demand pricing, but they require you to reserve compute nodes and commit to paying for those nodes for either a one-year or three-year duration. For more information on Reserved Nodes in Redshift, please visit the following URL: (https://docs.aws.amazon.com/redshift/latest/mgmt/purchase-reserved-node-instance.html)

39. A company is planning to run a number of Admin related scripts using the AWS Lambda service.

There is a need to detect errors that occur while the scripts run. How can this be accomplished in the most effective manner?

  1. Use CloudWatch metrics and logs to watch for errors.
  2. Use CloudTrail to monitor for errors.
  3. Use the AWS Config service to monitor for errors.
  4. Use the AWS Inspector service to monitor for errors.

Explanation: AWS Documentation mentions the following:
AWS Lambda automatically monitors Lambda functions on your behalf, reporting metrics through Amazon CloudWatch. To help you troubleshoot failures in a function, Lambda logs all requests handled by your function and also automatically stores logs generated by your code through Amazon CloudWatch Logs.
For more information on Monitoring Lambda functions, please visit the following URL:
https://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-logs.html
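
Beyond the logs, a CloudWatch alarm on the function's built-in Errors metric can surface failures proactively. A minimal sketch is shown below; the function name, alarm name, and SNS topic ARN are hypothetical:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm whenever the admin script's Lambda function reports any errors.
cloudwatch.put_metric_alarm(
    AlarmName="admin-script-errors",                 # hypothetical alarm name
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "admin-script"}],  # hypothetical function
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # hypothetical topic
)
```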

40. A CloudFront distribution is being used to distribute content from an S3 bucket.

It is required that only a particular set of users get access to certain content. How can this be accomplished?

  1. Create IAM Users for each user and then provide access to the S3 bucket content.
  2. Create IAM Groups for each set of users and then provide access to the S3 bucket content.
  3. Create CloudFront signed URLs and then distribute these URLs to the users.
  4. Use IAM Policies for the underlying S3 buckets to restrict content.

Explanation: AWS Documentation mentions the following: Many companies that distribute content via the internet want to restrict access to documents, business data, media streams, or content that is intended for selected users, for example, users who have paid a fee. To securely serve this private content using CloudFront, you can do the following:
Require that your users access your private content by using special CloudFront signed URLs or signed cookies. Require that your users access your Amazon S3 content using CloudFront URLs, not Amazon S3 URLs. Requiring CloudFront URLs isn’t required, but we recommend it to prevent users from bypassing the restrictions that you specify in signed URLs or signed cookies.
For more information on serving private content via CloudFront, please visit the following URL:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html#
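
A minimal sketch of generating a signed URL is shown below; it assumes the third-party `rsa` package, a hypothetical key ID, private key file, and distribution domain:

```python
import datetime
import rsa  # third-party package used to sign with the CloudFront key
from botocore.signers import CloudFrontSigner

def rsa_signer(message):
    # Private key of the CloudFront key pair / trusted key group (hypothetical path).
    with open("private_key.pem", "rb") as f:
        private_key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, private_key, "SHA-1")

signer = CloudFrontSigner("K2JCJMDEHXQW5F", rsa_signer)  # hypothetical key id

# URL valid for one hour; only users who receive it can fetch the object.
signed_url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/private/report.pdf",  # hypothetical object
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(signed_url)
```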

41. A company’s requirement is to have a Stack-based model for its resources in AWS.

There is a need to have different stacks for the Development and Production environments.

Which of the following can be used to fulfill this required methodology?

  1. Use EC2 tags to define different stack layers for your resources.
  2. Define the metadata for the different layers in DynamoDB.
  3. Use AWS OpsWorks to define the different layers for your application.
  4. Use AWS Config to define the different layers for your application.

Explanation: The requirement can be fulfilled via the OpsWorks service. The AWS Documentation given below supports this requirement: AWS OpsWorks Stacks lets you manage applications and servers on AWS and on-premises. With OpsWorks Stacks, you can model your application as a stack containing different layers, such as load balancing, database, and application server. You can deploy and configure Amazon EC2 instances in each layer or connect other resources such as Amazon RDS databases. For more information on OpsWorks stacks, please visit the following URL: (https://aws.amazon.com/opsworks/stacks/) A stack is basically a collection of instances that are managed together for serving a common task. Consider a sample stack whose purpose is to serve web applications. It will be comprised of the following instances: a set of application server instances, each of which handles a portion of the incoming traffic; a load balancer instance, which takes incoming traffic and distributes it across the application servers; and a database instance, which serves as a back-end data store for the application servers. A common practice is to have multiple stacks that represent different environments. A typical set of stacks consists of: a development stack to be used by developers to add features, fix bugs, and perform other development and maintenance tasks; a staging stack to verify updates or fixes before exposing them publicly; and a production stack, which is the public-facing version that handles incoming requests from users. For more information, please see the link given below: (https://docs.aws.amazon.com/opsworks/latest/userguide/workingstacks.html)

42. You are designing a web application that stores static assets in an Amazon Simple Storage Service (S3) bucket.

You expect this bucket to receive over 150 PUT requests per second. What should you do to ensure optimal performance?

  1. Use Multipart upload.
  2. Add a random prefix to the key names.
  3. Amazon S3 will automatically manage performance at this scale.
  4. Use a predictable naming scheme, such as sequential numbers or date time sequences in the key names.

Explanation: Based on the newer S3 performance announcement, Amazon S3 now provides increased request rate performance automatically. However, AWS has not yet updated the exam questions, so as per the exam, Option B is the correct answer.
https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
One way to introduce randomness to key names is to add a hash string as prefix to the key name. For example, you can compute an MD5 hash of the character sequence that you plan to assign as the key name. From the hash, pick a specific number of characters, and add them as the prefix to the key name.
Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object’s data. You can upload these object parts independently and in any order. If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object. In general, when your object size reaches 100 MB, you should consider using multipart uploads instead of uploading the object in a single operation.

If your workload in an S3 bucket routinely exceeds 100 PUT/LIST/DELETE requests per second or more than 300 GET requests per second, you should follow some key-naming guidelines for your S3 bucket.

One way is to add a hash prefix to the key name.

http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html

43. When managing permissions for the API Gateway, what can be used to ensure that the right level of permissions are given to Developers, IT Admins and users?

These permissions should be easily managed.

  1. Use the secure token service to manage the permissions for different users.
  2. Use IAM Policies to create different policies for different types of users.
  3. Use the AWS Config tool to manage the permissions for different users.
  4. Use IAM Access Keys to create sets of keys for different types of users.

Explanation: AWS Documentation mentions the following: You control access to Amazon API Gateway with IAM permissions (http://docs.aws.amazon.com/IAM/latest/UserGuide/access_permissions.html) by controlling access to the following two API Gateway component processes: To create, deploy, and manage an API in API Gateway, you must grant the API developer permissions to perform the required actions supported by the API management component of API Gateway. To call a deployed API or to refresh the API caching, you must grant the API caller permissions to perform required IAM actions supported by the API execution component of API Gateway. For more information on permissions with the API Gateway, please visit the following URL: (https://docs.aws.amazon.com/apigateway/latest/developerguide/permissions.html)
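
As an illustration, here is a minimal sketch of an IAM policy for API callers that allows invoking one deployed stage only; the account ID, API ID, stage, and policy name are hypothetical, and developers or admins would receive a separate policy with API Gateway management actions instead:

```python
import json
import boto3

iam = boto3.client("iam")

# Policy for API callers: allow invoking a specific deployed API stage only.
invoke_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "execute-api:Invoke",
        "Resource": "arn:aws:execute-api:us-east-1:123456789012:a1b2c3d4e5/prod/*",
    }],
}

iam.create_policy(
    PolicyName="ApiGatewayInvokeProd",        # hypothetical policy name
    PolicyDocument=json.dumps(invoke_policy),
)
```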

44. Your Development team wants to start making use of EC2 Instances to host their Application and Web servers.

In the space of automation, they want the Instances to always download the latest version of the Web and Application servers when they are
launched. As an architect, what would you recommend for this scenario?

  1. Ask the Development team to create scripts which can be added to the User Data section when the instance is launched.
  2. Ask the Development team to create scripts which can be added to the Meta Data section when the instance is launched.
  3. Use Auto Scaling Groups to install the Web and Application servers when the instances are launched.
  4. Use EC2 Config to install the Web and Application servers when the instances are launched.

Explanation: AWS Documentation mentions the following:
When you launch an instance in Amazon EC2, you have the option of passing user data to the instance that can be used to perform common automated configuration tasks and even run scripts after the instance starts. You can pass two types of user data to Amazon EC2: shell scripts and cloud-init directives. You can also pass this data into the launch wizard as plain text, as a file (this is useful for launching instances using the command line tools), or as base64-encoded text (for API calls).
For more information on User Data, please visit the following URL:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
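
A minimal sketch of launching an instance with a user-data script is shown below; the AMI ID, instance type, and the packages installed are hypothetical placeholders for the Web and Application server setup:

```python
import boto3

ec2 = boto3.client("ec2")

# Shell script passed as user data; it runs once on first boot and pulls
# the latest packages (package names are placeholders).
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
```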

45. You have a business-critical two-tier web application currently deployed in 2 Availability Zones in a single region, using Elastic Load Balancing and Auto Scaling.

The app depends on synchronous replication at the database layer. The application needs to remain fully available even if one application AZ goes offline and if Auto Scaling cannot launch new instances in the remaining AZ. How can the current architecture be enhanced to ensure this?

  1. Deploy in 2 regions using Weighted Round Robin with Auto Scaling minimums set at 50% peak load per region.
  2. Deploy in 3 AZ with Auto Scaling minimum set to handle 33 percent peak load per zone.
  3. Deploy in 3 AZ with Auto Scaling minimum set to handle 50 percent peak load per zone.
  4. Deploy in 2 regions using Weighted Round Robin with Auto Scaling minimums set at 100% peak load per region.

Explanation: Since the requirement states that the application should never go down even if an AZ is not available, we need to maintain 100% availability. Options A and D are incorrect because region deployment is not possible for ELB. ELBs can manage traffic within a region and not between regions. Option B is incorrect because even if one AZ goes down, we would be operating at only 66% and not the required 100%. For more information on Auto Scaling, please visit the below URL: (https://aws.amazon.com/autoscaling/) NOTE: The question clearly mentions that “The application needs to remain fully available even if one application AZ goes offline and if Auto Scaling cannot launch new instances in the remaining AZ.” Here you need to maintain 100% availability. With Option B, deploying in 3 AZs with the minimum set to 33% of peak load per zone means that if one AZ fails, the remaining capacity is 33% + 33% = 66%, so 34% of the load cannot be handled. With Option C, deploying in 3 AZs with the minimum set to 50% of peak load per zone means that if one AZ fails, the remaining capacity is 50% + 50% = 100%, so the full load can still be handled.

46. A customer wants to import their existing virtual machines to the cloud.

Which service can they use for this? Choose one answer from the options given below.

  1. VM Import/Export
  2. AWS Import/Export
  3. AWS Storage Gateway
  4. DB Migration Service

解析: VM Import/Export enables customers to import Virtual Machine (VM) images in order to create Amazon EC2 instances. Customers can also export previously imported EC2 instances to create VMs. Customers can use VM Import/Export to leverage their previous investments in building VMs by migrating their VMs to Amazon EC2.
For more information on AWS VM Import, please visit the URL:
https://aws.amazon.com/ec2/vm-import/
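For illustration, a hedged boto3 sketch of starting a VM import, assuming the exported VMDK image has already been uploaded to a placeholder S3 bucket and that the `vmimport` service role described in the documentation exists.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Assumes the exported VM disk image was already copied to S3.
response = ec2.import_image(
    Description="On-premises web server",
    DiskContainers=[{
        "Description": "Primary disk",
        "Format": "vmdk",
        "UserBucket": {
            "S3Bucket": "example-vm-import-bucket",   # placeholder bucket
            "S3Key": "exports/webserver-disk1.vmdk",  # placeholder key
        },
    }],
)

# The import runs asynchronously; poll its progress with the returned task ID.
print(ec2.describe_import_image_tasks(ImportTaskIds=[response["ImportTaskId"]]))
```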

47. A company is running three production web server Reserved EC2 Instances with EBS-backed root volumes.

These instances have a consistent CPU load of 80%. Traffic is being distributed to these instances by an Elastic Load Balancer. They also have production and development Multi-AZ RDS MySQL databases. What recommendation would you make to reduce cost in this environment without affecting availability of mission-critical systems? Choose the correct answer from the options given below.

  1. Consider using On-demand instances instead of Reserved EC2 instances.
  2. Consider not using a Multi-AZ RDS deployment for the development database.
  3. Consider using Spot instances instead of Reserved EC2 instances.
  4. Consider removing the Elastic Load Balancer

解析: Multi-AZ RDS databases minimize downtime, provide high availability, and reduce performance impact during backups, so they are highly recommended for production systems. Development systems may be able to get by with a standard RDS deployment, since uptime and performance requirements may be lower in a development environment.
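A minimal sketch of the recommendation, assuming a hypothetical development DB instance identifier; converting it to a Single-AZ deployment removes the standby replica and its cost while leaving the production database untouched.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Convert the development database (placeholder identifier) to Single-AZ.
# The production database is not modified, so its availability is unaffected.
rds.modify_db_instance(
    DBInstanceIdentifier="dev-mysql-db",
    MultiAZ=False,
    ApplyImmediately=True,
)
```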

48. An application consists of a couple of EC2 Instances.

One EC2 Instance hosts a web application and the other Instance hosts the database server. Which of the following changes can be made to ensure high availability of the database layer?

  1. Enable Read Replicas for the database.
  2. Enable Multi-AZ for the database.
  3. Have another EC2 Instance in the same Availability Zone with replication configured.
  4. Have another EC2 Instance in another Availability Zone with replication configured.

解析: Since this is a self-managed database and not an AWS RDS instance, options A and B are incorrect. To ensure high availability, have the EC2 Instance in another Availability Zone, so even if one goes down, the other one will still be available. One can refer to the following media link for achieving high availability in AWS. https://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_ftha_04.pdf

49. You want to host a static website on AWS.

As a Solutions architect, you have been given a task to establish a serverless architecture for that. Which of the following could be included in the proposed architecture? Choose 2 answers from the options given below.

  1. Use DynamoDB to store data in tables.
  2. Use EC2 to host the data on EBS Volumes.
  3. Use the Simple Storage Service to store data.
  4. Use AWS RDS to store the data.

50. A company hosts data in S3.

There is now a mandate that going forward, all data in the S3 bucket needs to be encrypted at rest. How can this be achieved?

  1. Use AWS Access Keys to encrypt the data.
  2. Use SSL Certificates to encrypt the data.
  3. Enable Server-side encryption on the S3 bucket.
  4. Enable MFA on the S3 bucket.

解析: AWS Documentation mentions the following: Server-side encryption is about data encryption at rest—that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it. As long as you authenticate your request and you have access permissions, there is no difference in the way you access encrypted or unencrypted objects. For more information on S3 Server-side encryption, please refer to the below link: (https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html)
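As a sketch, default encryption can be enabled on the bucket so every new object is encrypted at rest without changing the application. The bucket name is a placeholder, and SSE-KMS could be used in place of SSE-S3.

```python
import boto3

s3 = boto3.client("s3")

# Enable default server-side encryption (SSE-S3) on a placeholder bucket.
s3.put_bucket_encryption(
    Bucket="example-data-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}
        }]
    },
)
```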

51. A company hosts data in S3.

There is a requirement to control access to the S3 buckets.

Which two of the following can be used to achieve this?

  1. Use Bucket Policies.
  2. Use the Secure Token Service.
  3. Use IAM user policies.
  4. Use AWS Access Keys.

解析: Amazon S3 offers access policy options broadly categorized as resource-based policies and user policies. Access policies you attach to your resources (buckets and objects) are referred to as resource-based policies. For example, bucket policies and access control lists (ACLs) are resource-based policies. You can also attach access policies to users in your account. These are called user policies. You may choose to use resource-based policies, user policies, or some combination of these to manage permissions to your Amazon S3 resources.
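A hedged sketch of the resource-based approach: a bucket policy that allows reads only from a specific IAM role. The bucket name, account ID, and role name are placeholders for illustration.

```python
import json
import boto3

s3 = boto3.client("s3")

# Resource-based policy: only a specific role (placeholder ARN) may read objects.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowReadForAppRole",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/app-reader"},
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-data-bucket/*"
    }]
}

s3.put_bucket_policy(Bucket="example-data-bucket", Policy=json.dumps(bucket_policy))
```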

52. Your application provides data transformation services.

Files containing data to be transformed are first uploaded to Amazon S3 and then transformed by a fleet of Spot EC2 Instances. Files submitted by your premium customers must be transformed with the highest priority. How would you implement such a system?

  1. Use a DynamoDB table with an attribute defining the priority level. Transformation instances will scan the table for tasks, sorting the results by priority level.
  2. Use Route 53 latency-based routing to send high priority tasks to the closest transformation instances.
  3. Use two SQS queues, one for high priority messages, the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue.
  4. Use a single SQS queue. Each message contains the priority level. Transformation instances poll high-priority messages first.

解析: Work Queues: Decouple components of a distributed application that may not all process the same amount of work simultaneously.
Buffer and Batch Operations: Add scalability and reliability to your architecture, and smooth out temporary volume spikes without losing messages or increasing latency.
Request Offloading: Move slow operations off interactive request paths by enqueuing the request.
Fanout: Combine SQS with Simple Notification Service (SNS) to send identical copies of a message to multiple queues in parallel.
Priority: Use separate queues to provide prioritization of work (see the sketch after this list).
Scalability: Because message queues decouple your processes, it’s easy to scale up the send or receive rate of messages – simply add another process.
Resiliency: When part of your system fails, it doesn’t need to take the entire system down. Message queues decouple components of your system, so if a process that is reading messages from the queue fails, messages can still be added to the queue to be processed when the system recovers.
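A minimal sketch of option 3, assuming two pre-created queues with placeholder URLs; each transformation worker drains the high-priority queue before falling back to the default one.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Placeholder queue URLs for premium and default customers.
HIGH_PRIORITY_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/transform-high"
DEFAULT_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/transform-default"


def next_task():
    """Return one message, preferring the high-priority queue."""
    for queue_url in (HIGH_PRIORITY_URL, DEFAULT_URL):
        resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                                   WaitTimeSeconds=2)
        messages = resp.get("Messages", [])
        if messages:
            return queue_url, messages[0]
    return None, None


queue_url, message = next_task()
if message:
    # ... transform the file referenced in message["Body"] ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```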

53. A company is planning to use the AWS ECS service to work with containers.

There is a need for the least amount of administrative overhead while launching containers. How can this be achieved?

  1. Use the Fargate launch type in AWS ECS .
  2. Use the EC2 launch type in AWS ECS.
  3. Use the Auto Scaling launch type in AWS ECS.
  4. Use the ELB launch type in AWS ECS.

解析: AWS Documentation mentions the following: The Fargate launch type allows you to run your containerized applications without the need to provision and manage the backend infrastructure. Just register your task definition and Fargate launches the container for you. For more information on the different launch types, please visit the link: (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_types.html)
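For illustration, a hedged boto3 sketch of launching a task with the Fargate launch type; the cluster, task definition, subnet, and security group are placeholders.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# No EC2 container instances to manage -- Fargate provisions the compute.
ecs.run_task(
    cluster="example-cluster",                 # placeholder cluster
    launchType="FARGATE",
    taskDefinition="example-webapp:1",         # placeholder task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```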

54. You currently manage a set of web servers hosted on EC2 Servers with public IP addresses.

These IP addresses are mapped to domain names. There was an urgent maintenance activity that had to be carried out on the servers and the servers had to be stopped and restarted. Now the web application hosted on these EC2 Instances is not accessible via the domain names configured earlier. Which of the following could be a reason for this?

  1. The Route 53 hosted zone needs to be restarted.
  2. The network interfaces need to be initialized again.
  3. The public IP addresses need to be associated with the ENI again.
  4. The public IP addresses have changed after the instance was stopped and started.

解析: By default, the public IP address of an EC2 Instance is released after the instance is stopped and started. Hence, the earlier IP address which was mapped to the domain names would have become invalid now.
For more information on public IP addressing, please visit the below URL:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-instance-addressing.html#concepts-public-addresses
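To avoid this situation, an Elastic IP address can be associated with each instance so the public address survives a stop/start. A hedged sketch with a placeholder instance ID:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Allocate an Elastic IP and bind it to the instance (placeholder ID).
# Unlike the default public IP, this address persists across stop/start.
allocation = ec2.allocate_address(Domain="vpc")
ec2.associate_address(
    InstanceId="i-0123456789abcdef0",
    AllocationId=allocation["AllocationId"],
)
print("Stable public IP:", allocation["PublicIp"])
```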

55. You have both production and development based instances running on your VPC.

It is required to ensure that people responsible for the development instances do not have access to work on production instances for better security. Which of the following would be the best way to accomplish this using policies? Choose the correct answer from the options given below.

  1. Launch the development and production instances in separate VPCs and use VPC Peering.
  2. Create an IAM Policy with a condition that allows access to only those instances which are used for production or development.
  3. Launch the development and production instances in different Availability Zones and use Multi-Factor Authentication.
  4. Define the tags on the Development and production servers and add a condition to the IAM Policy which allows access to specific tags.

解析: You can easily add tags to define which instances are production and which ones are development instances. These tags can then be used while controlling access via an IAM Policy. For more information on tagging your resources, please refer to the link below. (http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html) Note: This could also be done with option B. However, the question asks for the best way to accomplish this using policies. With option D, you avoid maintaining a separate IAM policy for each instance.
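A hedged sketch of such a policy, allowing a developer group to act only on instances tagged Environment=Development; the tag key/value, group name, and account ID are assumptions for illustration.

```python
import json
import boto3

iam = boto3.client("iam")

# Developers may start/stop/reboot only instances tagged Environment=Development.
dev_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances", "ec2:RebootInstances"],
        "Resource": "arn:aws:ec2:*:123456789012:instance/*",
        "Condition": {
            "StringEquals": {"ec2:ResourceTag/Environment": "Development"}
        }
    }]
}

iam.put_group_policy(
    GroupName="developers",                      # placeholder group
    PolicyName="DevInstancesOnly",
    PolicyDocument=json.dumps(dev_only_policy),
)
```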

56. A company is hosting a MySQL database in AWS using the AWS RDS service.

To offload the reads, a Read Replica has been created and reports are run off the Read Replica database. But at certain times, the reports show stale data. Why may this be the case?

  1. The Read Replica has not been created properly.
  2. The backup of the original database has not been set properly.
  3. This is due to the replication lag.
  4. The Multi-AZ feature is not enabled

解析: An AWS Whitepaper describes the following caveat for Read Replicas, which designers must take into consideration: Read Replicas are separate database instances that are replicated asynchronously. As a result, they are subject to replication lag and might be missing some of the latest transactions. Application designers need to consider which queries have tolerance to slightly stale data. Those queries can be executed on a Read Replica, while the rest should run on the primary node. Read Replicas also cannot accept any write queries. For more information on AWS Cloud best practices, please visit the following URL: (https://d1.awsstatic.com/whitepapers/AWS_Cloud_Best_Practices.pdf)
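To quantify the staleness, the ReplicaLag CloudWatch metric of the Read Replica can be inspected; a sketch with a placeholder replica identifier:

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Average replication lag (in seconds) of the Read Replica over the last hour.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="ReplicaLag",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "reports-read-replica"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), "seconds behind")
```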

57. You have enabled CloudTrail logs for your company’s AWS account.

In addition, the IT Security department has mentioned that the logs need to be encrypted. How can this be achieved?

  1. Enable SSL certificates for the CloudTrail logs.
  2. There is no need to do anything since the logs will already be encrypted.
  3. Enable Server-Side Encryption for the trail.
  4. Enable Server-Side Encryption for the destination S3 bucket.

解析: By default, CloudTrail event log files are encrypted using Amazon S3 server-side encryption (SSE). You can also choose to encrypt your log files with an AWS Key Management Service (AWS KMS) key. You can store your log files in your bucket for as long as you want, and you can define Amazon S3 lifecycle rules to archive or delete log files automatically. If you want notifications about log file delivery and validation, you can set up Amazon SNS notifications. By default, the log files delivered by CloudTrail to your bucket are encrypted with Amazon server-side encryption using Amazon S3-managed encryption keys (SSE-S3). To provide a security layer that you can manage directly, you can instead use server-side encryption with AWS KMS-managed keys (SSE-KMS) for your CloudTrail log files.
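A minimal sketch of switching an existing trail to SSE-KMS; the trail name and KMS key ARN are placeholders, and the key policy must already allow CloudTrail to use the key.

```python
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Re-encrypt future log files with a customer-managed KMS key (placeholder ARN).
cloudtrail.update_trail(
    Name="management-events-trail",
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
)
```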

58. A company has set up their data layer in the Simple Storage Service.

There are a number of requests which include read/write and updates to objects in an S3 bucket. Users sometimes complain that updates to an object are not being reflected. Which of the following could be a reason for this?

  1. Versioning is not enabled for the bucket, so the newer version does not reflect the right data.
  2. Updates are being made to the same key for the object.
  3. Encryption is enabled for the bucket, hence it is taking time for the update to occur.
  4. The metadata for the S3 bucket is incorrectly configured.

59. A company has an application that uses the S3 bucket as its data layer.

As per the monitoring on the S3 bucket, it can be seen that the number of GET requests is 400 requests per second. The IT Operations team receives service requests about users getting HTTP 500 or 503 errors while accessing the application. What can be done to resolve these errors? Choose 2 answers from the options given below.

  1. Add a CloudFront distribution in front of the bucket.
  2. Add randomness to the key names.
  3. Add an ELB in front of the S3 bucket.
  4. Enable Versioning for the S3 bucket.

解析: AWS Documentation mentions the following: When your workload is sending mostly GET requests, you can add randomness to key names. In addition, you can integrate Amazon CloudFront with Amazon S3 to distribute content to your users with low latency and a high data transfer rate. Note: S3 can now scale to high request rates. Your application can achieve at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second per prefix in a bucket. However, the AWS exam questions have not yet been updated to reflect these changes, so the answer to this question is based on the earlier request-rate performance guidance. For more information on S3 bucket performance, please visit the following URL: (https://docs.aws.amazon.com/AmazonS3/latest/dev/PerformanceOptimization.html)
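As a sketch of the key-randomness recommendation, a short hash prefix can be prepended to each key so that objects spread across key-name partitions; the bucket name and naming scheme are assumptions.

```python
import hashlib

import boto3

s3 = boto3.client("s3")


def randomized_key(original_key: str) -> str:
    """Prefix the key with a short hash so objects spread across key-name partitions."""
    prefix = hashlib.md5(original_key.encode()).hexdigest()[:4]
    return f"{prefix}/{original_key}"


# e.g. "reports/2024-01-01.csv" becomes something like "3f2a/reports/2024-01-01.csv"
s3.put_object(
    Bucket="example-data-bucket",
    Key=randomized_key("reports/2024-01-01.csv"),
    Body=b"...",
)
```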

60. A Solutions Architect is designing a solution to store and archive corporate documents and has determined that Amazon Glacier is the right solution.

Data has to be retrieved within 3-5 hours as directed by management.

Which feature in Amazon Glacier can help meet this requirement and ensure cost effectiveness?

  1. Vault Lock
  2. Expedited retrieval
  3. Bulk retrieval
  4. Standard retrieval

解析: AWS Documentation mentions the following on Standard retrievals:

Standard retrievals are a low-cost way to access your data within just a few hours. For example, you can use Standard retrievals to restore backup data, retrieve archived media content for same-day editing or distribution, or pull and analyze logs to drive business decisions within hours.

For more information on Amazon Glacier retrievals, please visit the following URL:

https://aws.amazon.com/glacier/faqs/#dataretrievals

There are three options for retrieving data with varying access times and cost: Expedited, Standard, and Bulk retrievals.

  • Standard retrievals allow you to access any of your archives within several hours. Standard retrievals typically complete within 3 – 5 hours (see the sketch after this list).
  • Bulk retrievals are S3 Glacier’s lowest-cost retrieval option, enabling you to retrieve large amounts, even petabytes, of data inexpensively in a day. Bulk retrievals typically complete within 5 – 12 hours.
  • Expedited retrievals allow you to quickly access your data when occasional urgent requests for a subset of archives are required. For all but the largest archives (250MB+), data accessed using Expedited retrievals are typically made available within 1 – 5 minutes. There are two types of Expedited retrievals: On-Demand and Provisioned. On-Demand requests are fulfilled when we are able to complete the retrieval within 1 – 5 minutes. Provisioned requests ensure that retrieval capacity for Expedited retrievals is available when you need them.
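For objects archived in the S3 Glacier storage class, a Standard-tier restore can be requested as sketched below; the bucket, key, and restore window are placeholders (archives stored directly in a Glacier vault would use the vault job APIs instead).

```python
import boto3

s3 = boto3.client("s3")

# Request a Standard-tier restore (typically completes within 3-5 hours).
s3.restore_object(
    Bucket="example-archive-bucket",
    Key="documents/contract-2017.pdf",
    RestoreRequest={
        "Days": 7,  # how long the restored copy stays available
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)
```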

61. You want to ensure that you keep a check on the Active EBS Volumes, Active snapshots and Elastic IP addresses you use so that you don’t go beyond the service limit. Which of the below services can help in this regard?
Please select:

  1. AWS CloudWatch
  2. AWS EC2
  3. AWS Trusted Advisor
  4. AWS SNS

解析: The correct answer is AWS Trusted Advisor, which monitors your usage against service limits such as active EBS Volumes, EBS snapshots, and Elastic IP addresses and warns you as you approach them. (The original question includes a screenshot of the service limits that Trusted Advisor can monitor.)
Option A is invalid because even though CloudWatch can monitor resources, it does not check usage against service limits.
Option B is invalid because EC2 is the Elastic Compute Cloud service itself.
Option D is invalid because SNS can send notifications but cannot check service limits.
For more information on Trusted Advisor monitoring, please visit the below URL:
https://aws.amazon.com/premiumsupport/ta-faqs
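If the account has a Business or Enterprise support plan, the Trusted Advisor service-limit check can also be read programmatically through the AWS Support API; a hedged sketch (the check name "Service Limits" is assumed and looked up at runtime):

```python
import boto3

# The AWS Support API (which exposes Trusted Advisor) requires a Business or
# Enterprise support plan and is served from us-east-1.
support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]
limits_check = next(c for c in checks if c["name"] == "Service Limits")

result = support.describe_trusted_advisor_check_result(
    checkId=limits_check["id"], language="en"
)
print(result["result"]["status"])   # e.g. "ok", or "warning" when nearing a limit
```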

62. You need to ensure that data stored in S3 is encrypted but do not want to manage the encryption keys.

Which of the following encryption mechanisms can be used in this case?

  1. SSE-S3
  2. SSE-C
  3. SSE-KMS
  4. SSE-SSL

解析: SSE-S3 provides an integrated solution where Amazon handles key management and key protection using multiple layers of security. You should choose SSE-S3 if you prefer to have Amazon manage your keys.

Use SSE-C if you want to maintain your own encryption keys, but don’t want to implement or leverage a client-side encryption library.

AWS Documentation mentions the following on Encryption keys:

SSE-S3 requires that Amazon S3 manages the data and master encryption keys.
SSE-C requires that you manage the encryption keys.
SSE-KMS requires that AWS manages the data key but you manage the master key in AWS KMS.
For more information on using the Key Management service for S3, please visit the below URL:

https://docs.aws.amazon.com/kms/latest/developerguide/services-s3.html
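For comparison, a short sketch of uploading an object with SSE-S3 versus SSE-KMS; the bucket, keys, and KMS key ARN are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3: Amazon manages both the data keys and the master key.
s3.put_object(
    Bucket="example-data-bucket",
    Key="reports/q1.csv",
    Body=b"...",
    ServerSideEncryption="AES256",
)

# SSE-KMS: you manage the master key in AWS KMS (placeholder key ARN).
s3.put_object(
    Bucket="example-data-bucket",
    Key="reports/q1-kms.csv",
    Body=b"...",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
)
```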

63. IoT sensors monitor the number of bags that are handled at an airport.

The data gets sent back to a Kinesis stream with default settings. Every alternate day, the data from the stream is sent to S3 for processing. But it is noticed that S3 is not receiving all of the data that is being sent to the Kinesis stream. What could be the reason for this?

  1. The sensors probably stopped working on somedays, hence data is not sent to the stream.
  2. S3 can only store data for a day.
  3. Data records are only accessible for a default of 24 hours from the time they are added to a stream.
  4. Kinesis streams are not meant to handle IoT related data.

解析: Kinesis Streams supports changes to the data record retention period of your stream. A Kinesis stream is an ordered sequence of data records meant to be written to and read from in real time. Data records are therefore stored in shards in your stream temporarily. The time period from when a record is added to when it is no longer accessible is called the retention period. A Kinesis stream stores records for 24 hours by default, up to 168 hours. Option A, even though a possibility, cannot be taken for granted as the right option. Option B is invalid since S3 can store data indefinitely unless you have a lifecycle policy defined. Option D is invalid because the Kinesis service is well suited to this sort of data ingestion. For more information on Kinesis data retention, please refer to the below URL: (http://docs.aws.amazon.com/streams/latest/dev/kinesis-extended-retention.html)
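A minimal sketch of raising the retention period so records collected for the every-other-day export remain available; the stream name is a placeholder.

```python
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Extend retention from the 24-hour default to 48 hours so data is still in the
# stream when the alternate-day export job runs.
kinesis.increase_stream_retention_period(
    StreamName="baggage-sensor-stream",
    RetentionPeriodHours=48,
)
```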

64. You have a requirement for deploying an existing Java based application to AWS.

There is a need for automatic scaling for the underlying environment. Which of the following can be used to deploy this environment in the quickest way possible?

  1. Deploy to an S3 bucket and enable web site hosting.
  2. Use the Elastic Beanstalk service to provision the environment.
  3. Use EC2 with Auto Scaling for the environment.
  4. Use AMIs to build EC2 instances for deployment.

解析: AWS Documentation mentions the following:
AWS Elastic Beanstalk is an easy-to-use service for deploying and scaling web applications and services developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker on familiar servers such as Apache, Nginx, Passenger, and IIS. You can simply upload your code and Elastic Beanstalk automatically handles the deployment, from capacity provisioning, load balancing, and auto-scaling to application health monitoring. At the same time, you retain full control over the AWS resources powering your application and can access the underlying resources at any time.
For more information on the Elastic Beanstalk service, please visit the following URL:
https://aws.amazon.com/elasticbeanstalk/
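A hedged sketch of creating an Elastic Beanstalk environment for the Java application with boto3; the application name, environment name, and version label are placeholders, and the exact solution stack string is discovered at runtime rather than assumed.

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Pick a current Java platform; exact names come from list_available_solution_stacks().
stacks = eb.list_available_solution_stacks()["SolutionStacks"]
java_stack = next(s for s in stacks if "Corretto" in s or "Java" in s)

eb.create_environment(
    ApplicationName="orders-service",      # placeholder application, created beforehand
    EnvironmentName="orders-prod",
    SolutionStackName=java_stack,
    VersionLabel="v1",                     # an application version uploaded beforehand
)
```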

65. You have been tasked with architecting an application in AWS.

The architecture would consist of EC2, the Classic Load Balancer, Auto Scaling and Route 53. There is a directive to ensure that Blue-Green deployments are possible in this architecture.

Which routing policy could you ideally use in Route 53 for achieving Blue-Green deployments?

  1. Simple
  2. Multivalue answer
  3. Latency
  4. Weighted

解析: AWS Documentation mentions that the Weighted routing policy is good for testing new versions of software, and this is the ideal approach for Blue-Green deployments. Weighted routing lets you associate multiple resources with a single domain name (example.com) or subdomain name (acme.example.com) and choose how much traffic is routed to each resource. This can be useful for a variety of purposes, including load balancing and testing new versions of software. For more information on Route 53 routing policies, please visit the following URL: (https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html) Note: Multivalue answer is recommended only when you want to route traffic randomly to multiple resources, such as web servers; you can create one multivalue answer record for each resource and, optionally, associate an Amazon Route 53 health check with each record. However, in our case, we need to choose how much traffic is routed to each resource (Blue and Green). For example, Blue is currently live and we send a small portion of traffic to Green to check that everything works fine. If it does, we can shift fully to the Green resources; if not, we can change the weight for that record to 0 and Blue will be completely live again.
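A hedged sketch of shifting a small share of traffic to the Green stack with weighted records; the hosted zone ID, domain name, and ELB DNS names are placeholders.

```python
import boto3

route53 = boto3.client("route53")


def weighted_record(identifier: str, weight: int, target_dns: str) -> dict:
    """Build one weighted CNAME record for the blue/green pair."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "CNAME",
            "SetIdentifier": identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": target_dns}],
        },
    }


# Send ~90% of traffic to Blue and ~10% to Green during the test phase.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",   # placeholder hosted zone
    ChangeBatch={"Changes": [
        weighted_record("blue", 90, "blue-elb-1234567890.us-east-1.elb.amazonaws.com"),
        weighted_record("green", 10, "green-elb-0987654321.us-east-1.elb.amazonaws.com"),
    ]},
)
```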

66. You are building a large-scale confidential documentation web server on AWS and all of its documentation will be stored on S3.

One of the requirements is that it should not be publicly accessible from S3 directly, and CloudFront would be needed to accomplish this.
Which of the methods listed below would satisfy the outlined requirements? Choose an answer from the options below.

  1. Create an Identity and Access Management (IAM) user for CloudFront and grant access to the objects in your S3 bucket to that IAM User.
  2. Create an Origin Access Identity (OAI) for CloudFront and grant access to the objects in your S3 bucket to that OAI.
  3. Create individual policies for each bucket the documents are stored in, and grant access only to CloudFront in these policies.
  4. Create an S3 bucket policy that lists the CloudFront distribution ID as the Principal and the target bucket as the Amazon Resource Name (ARN).
