1. A company hosts a popular web application that connects to an Amazon RDS MySQL DB instance running in a private VPC subnet created with default ACL settings.

The IT Security department has identified a DoS attack from a suspected IP address. How can you protect the subnets from this attack?

  1. Change the Inbound Security Groups to deny access from the suspected IP.
  2. Change the Outbound Security Groups to deny access from the suspected IP.
  3. Change the Inbound NACL to deny access from the suspected IP.
  4. Change the Outbound NACL to deny access from the suspected IP.

2. A company is planning on allowing their users to upload and read objects from an S3 bucket.

Due to the large number of users, the read/write traffic will be very high.

How should the architect maximize Amazon S3 performance?

  1. Prefix each object name with a random string.
  2. Use the STANDARD_IA storage class.
  3. Prefix each object name with the current date.
  4. Enable versioning on the S3 bucket.

Explanation: If the request rate is high, you can prefix the object name with hash keys or random strings. The partitions used to store the objects will then be better distributed, allowing better read/write performance for your objects. For more information on how to ensure performance in S3, please visit the following URL: (https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html)
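
As an illustration only (not part of the original explanation), the following minimal Python sketch shows one way a hash-based prefix could be added to object keys before uploading; the bucket name, key scheme, and helper name are assumptions.

```python
import hashlib

import boto3

s3 = boto3.client("s3")


def upload_with_hashed_prefix(bucket, key, body):
    # Derive a short hash from the key and use it as a prefix, so keys spread
    # across S3 index partitions instead of clustering under a sequential
    # prefix such as a date.
    prefix = hashlib.md5(key.encode("utf-8")).hexdigest()[:4]
    s3.put_object(Bucket=bucket, Key=f"{prefix}/{key}", Body=body)


# Hypothetical usage:
# upload_with_hashed_prefix("my-media-bucket", "2018-06-01/photo-0001.jpg", b"...")
```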

3. A concern raised in your company is that developers could potentially delete production-based EC2 resources.

As a Cloud Admin, which of the below options would you choose to help alleviate this concern? Choose 2 options.

  1. Tag the production instances with a production-identifying tag and add resource-level permissions to the developers with an explicit deny on the terminate API call for instances with the production tag (a policy sketch follows after this list).
  2. Create a separate AWS account and add the developers to that account.
  3. Modify the IAM policy on the developers to require MFA before deleting EC2 instances, and disable MFA access to the employee.
  4. Modify the IAM policy on the developers to require MFA before deleting EC2 instances.
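
A minimal sketch of what the tag-based explicit deny in option 1 could look like, attached with boto3; the user name, policy name, and tag key/value are assumptions for illustration.

```python
import json

import boto3

iam = boto3.client("iam")

# Explicitly deny terminating any EC2 instance carrying the
# production-identifying tag (tag key/value below are assumptions).
deny_terminate_production = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "ec2:TerminateInstances",
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {"ec2:ResourceTag/Environment": "production"}
            },
        }
    ],
}

iam.put_user_policy(
    UserName="developer-user",                      # hypothetical user
    PolicyName="DenyTerminateProductionInstances",  # hypothetical policy name
    PolicyDocument=json.dumps(deny_terminate_production),
)
```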

4. You are developing a mobile application that needs to issue temporary security credentials to users.

This is essential due to security concerns. Which of the below services can help achieve this?

  1. AWS STS
  2. AWS Config
  3. AWS Trusted Advisor
  4. AWS Inspector

Explanation:

AWS Documentation mentions the following:

You can use the AWS Security Token Service (AWS STS) to create and provide trusted users with temporary security credentials that can control access to your AWS resources. Temporary security credentials are short-term, as the name implies. They can be configured to last for anywhere from a few minutes to several hours. After the credentials expire, AWS no longer recognizes them or allows any kind of access from API requests made with them.

For more information on the Security Token Service, please visit the following URL:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp.html
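
A minimal boto3 sketch of requesting temporary credentials from AWS STS; the role ARN and session name are placeholders, and a real mobile app would typically obtain such credentials through a federation flow rather than embedding long-term keys.

```python
import boto3

sts = boto3.client("sts")

# Request short-lived credentials by assuming a role (ARN is a placeholder).
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/MobileAppRole",
    RoleSessionName="mobile-user-session",
    DurationSeconds=3600,  # credentials expire after one hour
)

creds = response["Credentials"]

# Use the temporary credentials for subsequent calls, e.g. to S3.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```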

5. Your company is planning on using the API Gateway service to manage APIs for developers and users.

There is a need to segregate the access rights for both developers and users. How can this be accomplished?

  1. Use IAM permissions to control the access.
  2. Use AWS Access keys to manage the access.
  3. Use AWS KMS service to manage the access.
  4. Use AWS Config Service to control the access.

Explanation:

AWS Documentation mentions the following:

You control access to Amazon API Gateway with IAM permissions by controlling access to the following two API Gateway component processes:

To create, deploy, and manage an API in API Gateway, you must grant the API developer permissions to perform the required actions supported by the API management component of API Gateway.
To call a deployed API or to refresh the API caching, you must grant the API caller permissions to perform required IAM actions supported by the API execution component of API Gateway.
For more information on permissions for the API gateway, please visit the URL:

https://docs.aws.amazon.com/apigateway/latest/developerguide/permissions.html
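
As an illustrative sketch only, the policy below is the kind of statement you might attach to API callers (the API execution side); developers managing the API would instead need apigateway actions. The account ID, API ID, region, stage, and path are all placeholders.

```python
import json

# Allows invoking a single deployed API method; all identifiers are placeholders.
api_caller_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "execute-api:Invoke",
            "Resource": "arn:aws:execute-api:us-east-1:123456789012:a1b2c3d4e5/prod/GET/orders",
        }
    ],
}

print(json.dumps(api_caller_policy, indent=2))
```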

6. You have an S3 bucket hosted in AWS which is used to store promotional videos you upload.

You need to provide access to users for a limited duration of time. How can this be achieved?

  1. Use versioning and enable a timestamp for each version.
  2. Use Pre-Signed URLs.
  3. Use IAM Roles with a timestamp to limit the access.
  4. Use IAM policies with a timestamp to limit the access.

Explanation:

A pre-signed URL gives anyone who receives it time-limited access to an object, using the credentials and permissions of the user who generated the URL. You specify an expiration time when you create the URL, so access to the promotional videos can be granted for a limited duration without changing IAM users, roles, or policies.

For more information on sharing objects with pre-signed URLs, please visit the URL:

https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
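
A minimal boto3 sketch of generating a time-limited download URL; the bucket, key, and expiry are assumptions for illustration.

```python
import boto3

s3 = boto3.client("s3")

# Generate a URL that grants read access to one video for 24 hours.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "promo-videos-bucket", "Key": "launch-video.mp4"},
    ExpiresIn=24 * 60 * 60,  # seconds until the URL stops working
)
print(url)
```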

7. Your company has recently started using AWS services for their daily operations.

As a cloud administrator, which of the following services would you recommend using to have an insight on securing the infrastructure and for cost optimization?

  1. AWS Inspector
  2. AWS Trusted Advisor
  3. AWS WAF
  4. AWS Config

Explanation: AWS Documentation mentions the following on Trusted Advisor:
An online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment, Trusted Advisor provides real-time guidance to help you provision your resources following AWS best practices.
For more information on the Trusted Advisor, please visit the below URL:
https://aws.amazon.com/premiumsupport/trustedadvisor/

8. Your IT Security department has mandated that all traffic flowing in and out of EC2 instances needs to be monitored.

Which of the below services can help achieve this?

  1. Trusted Advisor
  2. VPC Flow Logs
  3. Use CloudWatch metrics
  4. Use CloudTrail

Explanation: AWS Documentation mentions the following:
VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you’ve created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.
For more information on VPC Flow Logs, please visit the following URL:
https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html
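
A minimal boto3 sketch of turning on flow logs for a VPC and sending them to CloudWatch Logs; the VPC ID, log group name, and IAM role ARN are placeholders.

```python
import boto3

ec2 = boto3.client("ec2")

# Capture all accepted and rejected traffic for the VPC and deliver it
# to a CloudWatch Logs group.
ec2.create_flow_logs(
    ResourceIds=["vpc-0123456789abcdef0"],
    ResourceType="VPC",
    TrafficType="ALL",
    LogGroupName="vpc-flow-logs",
    DeliverLogsPermissionArn="arn:aws:iam::123456789012:role/FlowLogsRole",
)
```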

9. A company is currently utilising a Redshift cluster as their production data warehouse.

As a cloud architect, you are tasked with ensuring that disaster recovery is in place. Which of the following options is best for addressing this issue?

  1. Take a copy of the underlying EBS volumes to S3 and then do Cross-Region Replication.
  2. Enable Cross-Region Snapshots for the Redshift Cluster.
  3. Create a CloudFormation template to restore the Cluster in another region.
  4. Enable Cross Availability Zone Snapshots for the Redshift Cluster.

Explanation: Snapshots of Redshift clusters can be copied across regions, enabling the cluster to be restored in a different region. For more information on managing Redshift Snapshots, please visit the following URL: (https://docs.aws.amazon.com/redshift/latest/mgmt/managing-snapshots-console.html)
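
A minimal boto3 sketch of enabling cross-region snapshot copy for a cluster; the cluster identifier, regions, and retention period are assumptions for illustration.

```python
import boto3

# Enable automatic copying of snapshots from the cluster's home region
# to a second region for disaster recovery.
redshift = boto3.client("redshift", region_name="us-east-1")

redshift.enable_snapshot_copy(
    ClusterIdentifier="prod-warehouse",
    DestinationRegion="us-west-2",
    RetentionPeriod=7,  # days to keep copied automated snapshots
)
```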

10. Your organization is building a collaboration platform for which they chose AWS EC2 for web and application servers and a MySQL RDS instance as the database.

Due to the nature of the traffic to the application, they would like to increase the number of connections to the RDS instance. How can this be achieved?

  1. Log in to the RDS instance and modify the database config file under /etc/mysql/my.cnf.
  2. Create a new parameter group, attach it to the DB instance and change the setting.
  3. Create a new option group, attach it to the DB instance and change the setting.
  4. Modify the setting in the default option group attached to the DB instance.

Explanation: You manage your DB engine configuration through the use of parameters in a DB parameter group. DB parameter groups act as a container for engine configuration values that are applied to one or more DB instances.

A default DB parameter group is created if you create a DB instance without specifying a customer-created DB parameter group. Each default DB parameter group contains database engine defaults and Amazon RDS system defaults based on the engine, compute class, and allocated storage of the instance. You cannot modify the parameter settings of a default DB parameter group; you must create your own DB parameter group to change parameter settings from their default value. Note that not all DB engine parameters can be changed in a customer-created DB parameter group.

If you want to use your own DB parameter group, you simply create a new DB parameter group, modify the desired parameters, and modify your DB instance to use the new DB parameter group. All DB instances that are associated with a particular DB parameter group get all parameter updates to that DB parameter group.

For more information, please visit the following URL:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html
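
A minimal boto3 sketch of the custom parameter group flow described above; the group name, engine family, parameter value, and instance identifier are assumptions for illustration.

```python
import boto3

rds = boto3.client("rds")

# 1. Create a custom parameter group for the MySQL engine family in use.
rds.create_db_parameter_group(
    DBParameterGroupName="custom-mysql-params",
    DBParameterGroupFamily="mysql5.7",
    Description="Custom parameters for the collaboration platform",
)

# 2. Raise max_connections in the custom group.
rds.modify_db_parameter_group(
    DBParameterGroupName="custom-mysql-params",
    Parameters=[
        {
            "ParameterName": "max_connections",
            "ParameterValue": "500",
            "ApplyMethod": "immediate",  # max_connections is a dynamic parameter
        }
    ],
)

# 3. Attach the custom parameter group to the DB instance.
rds.modify_db_instance(
    DBInstanceIdentifier="collab-db",
    DBParameterGroupName="custom-mysql-params",
    ApplyImmediately=True,
)
```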

11. You have configured an Auto Scaling group for which the minimum number of running instances is 2 and the maximum is 10.

For the past 30 minutes, all five instances have been running at 100% CPU utilization; however, the Auto Scaling group has not added any more instances to the group. What is the most likely cause for this? Choose 2 answers from the options given below.

  1. You already have 20 on-demand instances running.
  2. The Auto Scaling group’s MAX size is set at five.
  3. The Auto Scaling group’s scale down policy is too high.
  4. The Auto Scaling group’s scale up policy has not yet been reached.

Explanation: By default, you can run up to 20 On-Demand EC2 instances. If you need more, you have to complete a requisition form and submit it to AWS. However, the question already states that the MAX size is set to 10, so Option B is invalid and cannot be marked as an answer.

The question also does not mention that the metric chosen for this Auto Scaling policy is the CPUUtilization metric. It could be a DiskWrites or Network In/Out metric. Assuming the current setup uses a metric other than CPUUtilization, Option D is a correct choice. In this scenario, we are only discussing the non-functioning scale-up process and not the scale-down scenario.

Depending on the instance type, some instance types only support up to 5 On-Demand instances; however, the maximum for most instance types is 20 On-Demand instances. Based on that, Option A is correct.

This is explained in the AWS troubleshooting documentation for the error "<number of instances> instance(s) are already running. Launching EC2 instance failed." Cause: The Auto Scaling group has reached the limit set by the DesiredCapacity parameter. Solution: Update your Auto Scaling group by providing a new value for the --desired-capacity parameter using the update-auto-scaling-group command. If you’ve reached your limit for the number of EC2 instances, you can request an increase. For more information, see AWS Service Limits.

For more information on troubleshooting Auto Scaling, please refer to the following link: (http://docs.aws.amazon.com/autoscaling/latest/userguide/ts-as-capacity.html)

The link below provides information on EC2 instance limits: (https://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_ec2)

More information on limits of On-Demand EC2 instances is available at: (https://aws.amazon.com/ec2/faqs/#How_many_instances_can_I_run_in_Amazon_EC2)

12. A company has an application hosted in AWS.

This application consists of EC2 Instances that sit behind an ELB. The following are requirements from an administrative perspective:

a) Must be able to collect and analyse logs with regard to ELB’s performance.
b) Ensure that notifications are sent when the latency goes beyond 10 seconds.

Which of the following can be used to achieve this requirement? Choose 2 answers from the options given below.

  1. Use CloudWatch for monitoring.
  2. Enable CloudWatch logs and then investigate the logs whenever there is an issue.
  3. Enable the logs on the ELB with Latency Alarm that sends an email and then investigate the logs whenever there is an issue.
  4. Use CloudTrail to monitor whatever metrics need to be monitored.

13. An IT company has a set of EC2 Instances hosted in a VPC.

They are hosted in a private subnet. These instances now need to access resources stored in an S3 bucket. The traffic should not traverse the internet. The addition of which of the following would help fulfill this requirement?

  1. VPC Endpoint
  2. NAT Instance
  3. NAT Gateway
  4. Internet Gateway

Explanation: A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. For more information on AWS VPC endpoints, please visit the following URL: (https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-endpoints.html)
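
A minimal boto3 sketch of creating a gateway endpoint for S3 and associating it with the private subnet's route table; the VPC ID, route table ID, and region are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Traffic from the private subnet to S3 then stays on the Amazon network.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],
)
```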

14. Your team has developed an application and now needs to deploy that application onto an EC2 Instance.

This application interacts with a DynamoDB table. Which of the following is the correct and MOST SECURE way to ensure that the application interacts with the DynamoDB table?

  1. Create a role which has the necessary permissions and can be assumed by the EC2 instance.
  2. Use the API credentials from an EC2 instance. Ensure the environment variables are updated with the API access keys.
  3. Use the API credentials from a bastion host. Make the application on the EC2 Instance send requests via the bastion host.
  4. Use the API credentials from a NAT Instance. Make the application on the EC2 Instance send requests via the NAT Instance

15. Your development team has created a web application that needs to be tested on VPC.

You need to advise the IT admin team on how they should implement the VPC to ensure the application can be accessed from the Internet. Which of the following components would be part of the design? Choose 3 answers from the options given below.

  1. An Internet gateway attached to the VPC.
  2. A NAT gateway attached to the VPC.
  3. Route table entry added for the Internet gateway
  4. All instances launched with a public IP

16. A company is planning on migrating their infrastructure to AWS.

For the data stores, the company does not want to manage the underlying infrastructure. Which of the following would be ideal for this scenario? Choose 2 answers from the options given below.

  1. AWS S3
  2. AWS EBS Volumes
  3. AWS DynamoDB
  4. AWS EC2

Explanation:

AWS S3 is object level storage that is completely managed by AWS.

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database, so that you don’t have to worry about hardware provisioning, setup and configuration, replication, software patching, or cluster scaling.

Option B is incorrect since you need to manage EBS volumes yourself.

Option D is incorrect since this is a compute service, not a data store.

For more information on DynamoDB, please refer to the below link

https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html
For more information on Simple Storage Service, please refer to the below link

https://aws.amazon.com/s3/

17. Your company has a set of VPCs.

There is now a requirement to establish communication across the Instances in the VPCs. Your supervisor has asked you to implement a VPC peering connection. Which of the following considerations would you keep in mind for VPC peering? Choose 2 answers from the options given below.

  1. Ensuring that the VPCs don’t have overlapping CIDR blocks
  2. Ensuring that no on-premises communication is required via transitive routing
  3. Ensuring that the VPCs only have public subnets for communication
  4. Ensuring that the VPCs are created in the same region

Explanation: Answer – A and B.

The AWS Documentation mentions the following restrictions for VPC peering:

Overlapping CIDR Blocks – You cannot create a VPC peering connection between VPCs with matching or overlapping IPv4 CIDR blocks.

Edge to Edge Routing Through a VPN Connection or an AWS Direct Connect Connection – You have a VPC peering connection between VPC A and VPC B (pcx-aaaabbbb). VPC A also has a VPN connection or an AWS Direct Connect connection to a corporate network. Edge to edge routing is not supported; you cannot use VPC A to extend the peering relationship to exist between VPC B and the corporate network. For example, traffic from the corporate network can’t directly access VPC B by using the VPN connection or the AWS Direct Connect connection to VPC A.

Option C is incorrect since it is not necessary that the VPCs only contain public subnets.

Option D is incorrect since it is not necessary that the VPCs are created in the same region.

For more information on invalid peering configurations, please refer to the below link: (https://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/invalid-peering-configurations.html)

Note: AWS now supports VPC Peering across different regions. Please check the below AWS Docs for more details: (https://aws.amazon.com/about-aws/whats-new/2017/11/announcing-support-for-inter-region-vpc-peering/) (https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html)

18. You have been instructed to establish a successful site-to-site VPN connection from your on-premises network to the VPC (Virtual Private Cloud).

As an architect, which of the following prerequisites should you ensure are in place for establishing the site-to-site VPN connection? Choose 2 answers from the options given below.

  1. The main route table to route traffic through a NAT instance
  2. A public IP address on the customer gateway for the on-premises network
  3. A virtual private gateway attached to the VPC
  4. A virtual private gateway attached to the VPC

19. Your company has a set of EBS volumes and a set of adjoining EBS snapshots.

They want to minimize the costs for the underlying EBS snapshots. Which of the following approaches provides the lowest cost for Amazon Elastic Block Store snapshots while giving you the ability to fully restore data?

  1. Maintain two snapshots: the original snapshot and the latest incremental snapshot.
  2. Maintain a volume snapshot; subsequent snapshots will overwrite one another.
  3. Maintain a single snapshot: the latest snapshot is both incremental and complete.
  4. Maintain the most current snapshot, archive the original and incremental to Amazon Glacier.

20. You are using an m1.small EC2 Instance with one 300GB EBS General purpose SSD volume to host a relational database.

You determined that write throughput to the database needs to be increased. Which of the following approaches can help achieve this? Choose 2 answers from the options given below

  1. Use a larger EC2 Instance
  2. Enable Multi-AZ feature for the database.
  3. Consider using Provisioned IOPS Volumes.
  4. Put the database behind an Elastic Load Balancer.

21. Your company has a set of AWS RDS Instances.

Your management has asked you to disable Automated backups to save on cost. When you disable automated backups for AWS RDS, what are you compromising on?

  1. Nothing, you are actually saving resources on AWS.
  2. You are disabling the point-in-time recovery.
  3. Nothing really, you can still take manual backups.
  4. You cannot disable automated backups in RDS.

22. A company has a workflow that sends video files from their on-premises system to AWS for transcoding.

They use EC2 worker instances that pull transcoding jobs from SQS. As an architect, you need to design how the SQS service would be used in this architecture.

Which of the following is the ideal way in which the SQS service should be used?

  1. SQS should be used to guarantee the order of the messages.
  2. SQS should be used to synchronously manage the transcoding output
  3. SQS should be used to check the health of the worker instances.
  4. SQS should be used to facilitate horizontal scaling of encoding tasks.

Explanation:

A. SQS guarantees the order of the messages.

Not true, SQS does not guarantee the order of the messages at all. If your app requires messages be processed in a certain order, make sure your messages in the SQS queue have a sequence number on them.

B. SQS synchronously provides transcoding output.

Transcoding output would mean a piece of media (e.g. audio/video) that needs to be stored somewhere. Since media files are usually large binary data, this would probably go into S3 (and possibly metadata about the media file into DynamoDB, such as the S3 location, user/job that generated it, date/time it was transcoded, etc.). While SQS messages can accept binary data as a data type, you probably wouldn’t want to store an output media file as an SQS message because the maximum message size is 256KB, which would severely limit how large your transcoding output file could be. Also, the maximum retention time in an SQS queue is 14 days. In the unlikely case that you were willing to accept those limitations, you’d still be limited to a maximum of 120,000 messages in the queue, which would severely limit the amount of transcoding outputs you could store across those 14 days. This scenario just isn’t a good fit for an SQS queue. Drop your transcoding output files into S3, instead.

C. SQS checks the health of the worker instances.

SQS does not check the health of anything. If you’ve got a fleet of worker instances you want to monitor the health of, probably you’d want to have them in an auto-scaling group with a health check on the ASG to replace failed worker instances.

D. SQS helps to facilitate horizontal scaling of encoding tasks.

Yes, this is a great scenario for SQS. “Horizontal scaling” means you have multiple instances involved in the workload (encoding tasks in this case). You can drop messages indicating an encoding job needs to be performed into an SQS queue, immediately making the job notification message accessible to any number of encoding worker instances.
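
A minimal Python sketch of a worker loop consuming jobs from SQS; the queue URL and the transcode helper are placeholders, and each additional worker instance running this loop is another consumer, which is what makes horizontal scaling straightforward.

```python
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/transcode-jobs"  # placeholder


def transcode(job_description: str) -> None:
    # Placeholder for the actual encoding work.
    print(f"Transcoding job: {job_description}")


# Each worker instance runs the same loop; adding more instances simply
# adds more consumers to the queue.
while True:
    response = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling reduces empty receives
    )
    for message in response.get("Messages", []):
        transcode(message["Body"])
        # Delete only after successful processing so failed jobs are retried.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])
```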

23. You have been hired as a consultant for a company to implement their CI/CD processes.

They are keen to implement the deployment of their infrastructure as code. You need to advise them on what they can use on AWS to fulfil this requirement. Which of the following services would you recommend?

  1. Amazon Simple Workflow Service
  2. AWS Elastic Beanstalk
  3. AWS CloudFormation
  4. AWS OpsWorks

24. You are designing the application architecture for a company.

The architecture is going to consist of a web tier that will be hosted on EC2 Instances placed behind an Elastic Load Balancer. Which of the following would be important considerations when determining the specifications for the components of the application architecture?

Select 2 options:

  1. Determine the required I/O operations
  2. Determining the minimum memory requirements for an application
  3. Determining where the client intends to serve most of the traffic
  4. Determining the peak expected usage for a client’s application

25. For which of the following workloads should a Solutions Architect consider using Elastic Beanstalk?

Choose 2 answers from the options given below.

  1. A Web application using Amazon RDS
  2. An Enterprise Data Warehouse
  3. A long running worker process
  4. A static website
  5. A management task run once on a nightly basis

Explanation: AWS Documentation clearly mentions that the Elastic Beanstalk component can be used to create web server environments and worker environments, which suit a web application using Amazon RDS and a long-running worker process respectively. For more information on AWS Elastic Beanstalk web server environments, please visit the following URL: (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts-webserver.html) For more information on AWS Elastic Beanstalk worker environments, please visit the following URL: (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts-worker.html)

26. An application with a 150 GB relational database runs on an EC2 Instance.

The application is used infrequently, with small peaks in the morning and evening. What is the MOST cost-effective storage type among the options below?

  1. Amazon EBS provisioned IOPS SSD
  2. Amazon EBS Throughput Optimized HDD
  3. Amazon EBS General Purpose SSD
  4. Amazon EFS

Explanation:

Since the database is used infrequently and not throughout the day, and the question mentions the MOST cost effective storage type, the preferred choice would be EBS General Purpose SSD over EBS provisioned IOPS SSD.

The minimum volume size for Throughput Optimized HDD is 500 GB. As per our scenario, we need 150 GB only. Hence, option C, Amazon EBS General Purpose SSD, would be the most cost-effective choice.

For more information on AWS EBS Volumes, please visit the following URL:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSVolumes.html

Note:

SSD-backed volumes are optimized for transactional workloads involving frequent read/write operations with small I/O size, where the dominant performance attribute is IOPS. The question focuses on a relational DB, where input/output operations per second matter, hence gp2 is a good option in this case. Since the question does not mention any mission-critical low-latency requirement, PIOPS is not required.

HDD-backed volumes are optimized for large streaming workloads where throughput (measured in MiB/s) is a better performance measure than IOPS.

27. A data processing application in AWS must pull data from an Internet service.

A Solutions Architect is to design a highly available solution to access this data without placing bandwidth constraints on the application traffic. Which solution meets these requirements?

  1. Launch a NAT gateway and add routes for 0.0.0.0/0
  2. Attach a VPC endpoint and add routes for 0.0.0.0/0
  3. Attach an Internet gateway and add routes for 0.0.0.0/0
  4. Deploy NAT instances in a public subnet and add routes for 0.0.0.0/0

Explanation:

The AWS Documentation mentions the following:

An Internet gateway is a horizontally scaled, redundant, and highly available VPC component that allows communication between instances in your VPC and the Internet. It therefore imposes no availability risks or bandwidth constraints on your network traffic.

For more information on the Internet gateway, please visit the following URL:

https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html

Note: A NAT gateway is also a highly available architecture and is used to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances.
It can only scale up to 45 Gbps.
A NAT instance's bandwidth capability depends upon the instance type.

VPC Endpoints are used to enable private connectivity to services hosted in AWS from within your VPC, without using an Internet Gateway, VPN, Network Address Translation (NAT) devices, or firewall proxies. So they cannot be used to connect to the internet.

An Internet gateway is horizontally-scaled, redundant, and highly available. It imposes no bandwidth constraints.

28. While reviewing the Auto Scaling events for your application, you notice that your application is scaling up and down multiple times in the same hour.

What design choice could you make to optimize costs while preserving elasticity? Choose 2 answers from the options given below.

  1. Modify the Auto Scaling group termination policy to terminate the older instance first.
  2. Modify the Auto Scaling group termination policy to terminate the newest instance first.
  3. Modify the Auto Scaling group cooldown timers.
  4. Modify the Auto Scaling group to use Scheduled Scaling actions.
  5. Modify the CloudWatch alarm period that triggers your Auto Scaling scale down policy.

Explanation:

Here, not enough time is being given for the scaling activity to take effect and for the entire infrastructure to stabilize after the scaling activity. This can be taken care of by increasing the Auto Scaling group CoolDown timers.

For more information on Auto Scaling CoolDown, please visit the following URL:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/Cooldown.html
You will also have to define the right threshold for the CloudWatch alarm for triggering the scale down policy.

For more information on Auto Scaling Dynamic Scaling, please visit the following URL:

https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html
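
A minimal boto3 sketch of the two adjustments discussed above; the group name, metric, threshold, and periods are assumptions for illustration, and the scale-down policy ARN would be supplied via AlarmActions in a real setup.

```python
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Give the group more time to stabilise between scaling activities.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    DefaultCooldown=600,  # seconds
)

# Widen the evaluation window of the scale-down alarm so brief dips in
# load do not immediately remove instances.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-scale-down",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=3,
    Threshold=30.0,
    ComparisonOperator="LessThanThreshold",
    # AlarmActions=[<scale-down policy ARN>] would trigger the policy.
)
```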

29. A retailer exports data daily from its transactional databases into an S3 bucket in the Sydney region.

The retailer’s Data Warehousing team wants to import this data into an existing Amazon Redshift cluster in their VPC in Sydney. Corporate security policy mandates that data can only be transported within a VPC. What combination of the following steps will satisfy the security policy?

Choose 2 answers from the options given below.

  1. Enable Amazon Redshift Enhanced VPC Routing.
  2. Create a Cluster Security Group to allow the Amazon Redshift cluster to access Amazon S3.
  3. Create a NAT gateway in a public subnet to allow the Amazon Redshift cluster to access Amazon S3.
  4. Create and configure an Amazon S3 VPC endpoint.

Explanation: With Amazon Redshift Enhanced VPC Routing, traffic between the cluster and data repositories such as Amazon S3 is forced through your VPC. Redshift will not be able to use the S3 VPC endpoint without Enhanced VPC Routing enabled, so neither option satisfies the scenario unless the other is also selected. A NAT gateway would route the traffic outside the VPC, which does not satisfy the corporate security policy. (https://aws.amazon.com/about-aws/whats-new/2016/09/amazon-redshift-now-supports-enhanced-vpc-routing/)

30. An organization hosts a multi-language website on AWS, which is served using CloudFront.

Language is specified in the HTTP request as shown below:

http://d11111f8.cloudfront.net/main.html?language=de
http://d11111f8.cloudfront.net/main.html?language=en
http://d11111f8.cloudfront.net/main.html?language=es

How should AWS CloudFront be configured to deliver cached data in the correct language?

  1. Forward cookies to the origin
  2. Based on query string parameters
  3. Cache objects at the origin
  4. Serve dynamic content

Explanation: Since the language is specified in the query string parameters, CloudFront should be configured to forward and cache based on the query string parameters.

For more information on configuring CloudFront via query string parameters, please visit the following URL:

https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html
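
As an illustrative sketch only, the fragment below shows the cache behavior settings (legacy ForwardedValues style) that forward the language query string to the origin and include it in the cache key, so each language variant is cached separately; it is not a complete DistributionConfig.

```python
# Only the fragment relevant to query string caching is shown here.
cache_behavior_fragment = {
    "ForwardedValues": {
        "QueryString": True,
        "QueryStringCacheKeys": {"Quantity": 1, "Items": ["language"]},
        "Cookies": {"Forward": "none"},
        "Headers": {"Quantity": 0},
    }
}
```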

31. What options can be used to host an application that uses NGINX and is scalable at any point in time?

Choose 2 correct answers.

  1. AWS EC2
  2. AWS Elastic Beanstalk
  3. AWS SQS
  4. AWS ELB

Explanation: NGINX is open source software for web serving, reverse proxying, caching, load balancing, etc. It complements the load balancing capabilities of Amazon ELB and ALB by adding support for multiple HTTP, HTTP/2, and SSL/TLS services, content-based routing rules, caching, Auto Scaling support, and traffic management policies.
NGINX can be hosted on an EC2 instance through a series of clear steps: launch an EC2 instance through the console, SSH into the instance, and use the command yum install -y nginx to install NGINX. Also, make sure that it is configured to restart automatically after a reboot. NGINX is also available as an AMI for EC2.
It can also be installed with the Elastic Beanstalk service. To enable the NGINX proxy server with your Tomcat application, you must add a configuration file to .ebextensions in the application source bundle that you upload to Elastic Beanstalk. More information is available at:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-tomcat-platform.html#java-tomcat-proxy
The supported platforms list in the AWS Documentation shows that NGINX servers can be provisioned via the Elastic Beanstalk service. For more information on the supported platforms for AWS Elastic Beanstalk, please visit the following URL:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.platforms.html
The correct answers are: AWS EC2, AWS Elastic Beanstalk.

32. A million images are required to be uploaded to S3.

What option ensures optimal performance in this case?

  1. Use a sequential ID for the prefix.
  2. Use a hexadecimal hash for the prefix.
  3. Use a hexadecimal hash for the suffix.
  4. Use a sequential ID for the suffix.

Explanation: This recommendation for increasing performance in case of a high request rate in S3 is given in the AWS documentation. For more information on S3 performance considerations, please visit the following URL: (https://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html) Note: Amazon S3 maintains an index of object key names in each AWS Region. Object keys are stored in UTF-8 binary ordering across multiple partitions in the index. The key name determines which partition the key is stored in. Using a sequential prefix, such as a timestamp or an alphabetical sequence, increases the likelihood that Amazon S3 will target a specific partition for a large number of your keys, which can overwhelm the I/O capacity of the partition. If your workload is a mix of request types, introduce some randomness to key names by adding a hash string as a prefix to the key name. By introducing randomness to your key names, the I/O load is distributed across multiple index partitions. For example, you can compute an MD5 hash of the character sequence that you plan to assign as the key, and add three or four characters from the hash as a prefix to the key name.

33. A database is required for a Two-Tier application.

The data would go through multiple schema changes. The database needs to be durable, ACID compliant and changes to the database should not result in database downtime. Which of the following is the best option for data storage?

  1. AWS S3
  2. AWS Redshift
  3. AWS DynamoDB
  4. AWS Aurora

Explanation: As per the AWS documentation, Aurora does support schema changes.

Amazon Aurora is a MySQL-compatible database that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. Amazon Aurora has taken a common data definition language (DDL) statement that typically requires hours to complete in MySQL and made it near-instantaneous, i.e., about 0.15 seconds for a 100 GB table on an r3.8xlarge instance.

Note: Amazon DynamoDB is schema-less, in that the data items in a table need not have the same attributes or even the same number of attributes.
Hence it is not a solution.

In Aurora, when a user issues a DDL statement: The database updates the INFORMATION_SCHEMA system table with the new schema. In addition, the database timestamps the operation, records the old schema into a new system table (Schema Version Table), and propagates this change to read replicas.

For more information, please check below AWS Docs:

https://aws.amazon.com/blogs/database/amazon-aurora-under-the-hood-fast-ddl/

34. A company is using a Redshift cluster to store their data warehouse.

There is a requirement from the Internal IT Security team to encrypt data for the Redshift database. How can this be achieved?

  1. Encrypt the EBS volumes of the underlying EC2 Instances.
  2. Use AWS KMS Customer Default master key.
  3. Use SSL/TLS for encrypting the data.
  4. Use S3 Encryption.

Explanation:

AWS documentation mentions the following:

Amazon Redshift uses a hierarchy of encryption keys to encrypt the database. You can use either AWS Key Management Service (AWS KMS) or a hardware security module (HSM) to manage the top-level encryption keys in this hierarchy. The process that Amazon Redshift uses for encryption differs depending on how you manage keys.

For more information on Redshift encryption, please visit the following URL:

https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-db-encryption.html
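
A minimal boto3 sketch of launching an encrypted Redshift cluster with a KMS key; the identifiers, node type, password, and key ARN are assumptions for illustration (omitting KmsKeyId while Encrypted is true falls back to the default Redshift key).

```python
import boto3

redshift = boto3.client("redshift")

redshift.create_cluster(
    ClusterIdentifier="secure-warehouse",
    NodeType="dc2.large",
    NumberOfNodes=2,
    MasterUsername="admin",
    MasterUserPassword="Str0ngPassw0rd!",  # placeholder credential
    Encrypted=True,
    KmsKeyId="arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)
```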

35. A company owns an API which currently gets 1000 requests per second.

The company wants to host this in a cost effective manner using AWS. Which one of the following solutions is best suited for this?

  1. Use API Gateway with the backend services as it is.
  2. Use the API Gateway along with AWS Lambda
  3. Use CloudFront along with the API backend service as it is.
  4. Use ElastiCache along with the API backend service as it is.

Explanation:

Since the company has full ownership of the API, the best solution would be to convert the code for the API and use it in a Lambda function. This can help save on cost, since in the case of Lambda, you only pay for the time the function runs, and not for the infrastructure.

Then, you can use the API Gateway along with the AWS Lambda function to scale accordingly.

For more information on using API Gateway with AWS Lambda, please visit the following URL:

https://docs.aws.amazon.com/apigateway/latest/developerguide/getting-started-with-lambda-integration.html

Note: With Lambda you do not have to provision your own instances; Lambda performs all the operational and administrative activities on your behalf, including capacity provisioning, monitoring fleet health, applying security patches to the underlying compute resources, deploying your code, running a web service front end, and monitoring and logging your code. AWS Lambda provides easy scaling and high availability to your code without additional effort on your part.

36. Currently a company makes use of EBS snapshots to back up their EBS Volumes.

As a part of the business continuity requirement, these snapshots need to be made available in another region. How can this be achieved?

  1. Directly create the snapshot in the other region.
  2. Create Snapshot and copy the snapshot to a new region.
  3. Copy the snapshot to an S3 bucket and then enable Cross-Region Replication for the bucket.
  4. Copy the EBS Snapshot to an EC2 instance in another region.

Explanation: A snapshot is constrained to the region where it was created. After you create a snapshot of an EBS volume, you can use it to create new volumes in the same region. For more information, follow the link on Restoring an Amazon EBS Volume from a Snapshot below. You can also copy snapshots across regions, making it possible to use multiple regions for geographical expansion, data center migration, and disaster recovery.

For more information on EBS Snapshots, please visit the following URL:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html

For more information on Restoring an Amazon EBS Volume from a Snapshot, please visit the following URL:

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-restoring-volume.html

Option C is incorrect because the snapshots we take from EBS are stored in AWS-managed S3; we do not have the option to see the snapshots in S3. Hence, option C cannot be the correct answer.
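
A minimal boto3 sketch of copying a snapshot to another region; the regions and snapshot ID are placeholders, and the copy is initiated from the destination region.

```python
import boto3

# Run the copy from the destination region, pulling the snapshot from
# the source region.
ec2_destination = boto3.client("ec2", region_name="us-west-2")

response = ec2_destination.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId="snap-0123456789abcdef0",
    Description="DR copy of the production volume snapshot",
)
print(response["SnapshotId"])  # ID of the new snapshot in us-west-2
```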

37. A company has an application hosted in AWS.

This application consists of EC2 Instances which sit behind an ELB. The following are requirements from an administrative perspective:

a) Ensure notifications are sent when the read requests go beyond 1000 requests per minute
b) Ensure notifications are sent when the latency goes beyond 10 seconds
c) Any API activity which calls for sensitive data should be monitored

Which of the following can be used to satisfy these requirements? Choose 2 answers from the options given below.

  1. Use CloudTrail to monitor the API Activity
  2. Use CloudWatch logs to monitor the API Activity
  3. Use CloudWatch metrics for the metrics that need to be monitored as per the requirement and set up an alarm activity to send out notifications when the metric reaches the set threshold limit.
  4. Use custom log software to monitor the latency and read requests to the ELB.

Explanation:

AWS CloudTrail can be used to monitor the API calls.

For more information on CloudTrail, please visit the following URL:

https://aws.amazon.com/cloudtrail/
When you use CloudWatch metrics for an ELB, you can get the amount of read requests and latency out of the box.

For more information on using Cloudwatch with the ELB, please visit the following URL:

https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-cloudwatch-metrics.html

Option A is correct. CloudTrail is a web service that records AWS API calls for your AWS account and delivers log files to an Amazon S3 bucket. The recorded information includes the identity of the user, the start time of the AWS API call, the source IP address, the request parameters, and the response elements returned by the service.

https://docs.aws.amazon.com/awscloudtrail/latest/APIReference/Welcome.html

Option C is correct. Use CloudWatch metrics for the metrics that need to be monitored as per the requirement, and set up an alarm activity to send out notifications when the metric reaches the set threshold limit.
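
A minimal boto3 sketch of a latency alarm on a Classic ELB that notifies an SNS topic; the load balancer name and topic ARN are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the Classic ELB's average latency exceeds 10 seconds.
cloudwatch.put_metric_alarm(
    AlarmName="elb-high-latency",
    Namespace="AWS/ELB",
    MetricName="Latency",
    Dimensions=[{"Name": "LoadBalancerName", "Value": "app-elb"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=1,
    Threshold=10.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-notifications"],
)
```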

38. There is an application which consists of EC2 Instances behind a classic ELB.

An EC2 proxy is used for content management to backend instances. The application might not be able to scale properly.
Which of the following can be used to scale the proxy and backend instances appropriately?

Choose 2 answers from the options given below.

  1. Use Auto Scaling for the proxy servers.
  2. Use Auto Scaling for the backend instances.
  3. Replace the Classic ELB with Application ELB.
  4. Use Application ELB for both the front end and backend instances.

Explanation:

When you see a requirement for scaling, consider the Auto Scaling service provided by AWS. This can be used to scale both proxy servers and backend instances.

For more information on Auto Scaling, please visit the following URL:

https://docs.aws.amazon.com/autoscaling/plans/userguide/what-is-aws-auto-scaling.html

39. An application hosted in AWS allows users to upload videos to an S3 bucket.

A user is required to be given access to upload some videos for a week based on their profile. How can this be accomplished in the best way possible?

  1. Create an IAM bucket policy to provide access for a week’s duration.
  2. Create a pre-signed URL for each profile which will last for a week’s duration.
  3. Create an S3 bucket policy to provide access for a week’s duration.
  4. Create an IAM role to provide access for a week’s duration.

Explanation: Pre-signed URLs are the perfect solution when you want to give users temporary access to S3 buckets. So, whenever a new profile is created, you can create a pre-signed URL that lasts for a week and allows users to upload the required objects. For more information on pre-signed URLs, please visit the following URL: (https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html)
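
A minimal boto3 sketch of generating a week-long upload URL; the bucket and key names are placeholders, and one week (604,800 seconds) is the maximum expiry allowed for SigV4 pre-signed URLs.

```python
import boto3

s3 = boto3.client("s3")

upload_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "user-videos-bucket", "Key": "profiles/user-123/video.mp4"},
    ExpiresIn=7 * 24 * 60 * 60,  # valid for one week
)
print(upload_url)
```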

40. A company has a requirement for archival of 6TB of data.

There is an agreement with the stakeholders for an 8-hour agreed retrieval time. Which of the following can be used as the MOST cost-effective storage option?

  1. AWS S3 Standard
  2. AWS S3 Infrequent Access
  3. AWS Glacier
  4. AWS EBS Volumes

Explanation:

Amazon Glacier is the perfect solution for this. Since the agreed time frame for retrieval is met at 8 hours, this will be the most cost effective option.

For more information on AWS Glacier, please visit the following URL:

https://aws.amazon.com/documentation/glacier/