65 questions, 130 minutes.

1. You are the new IT architect in a company that operates a mobile sleep tracking application.

When activated at night, the mobile app is sending collected data points of 1 kilobyte every 5 minutes to your backend. The backend takes care of authenticating the user and writing the data points into an Amazon DynamoDB table.

Every morning, you scan the table to extract and aggregate last night’s data on a per user basis, and store the results in Amazon S3.

Users are notified via Amazon SNS mobile push notifications that new data is available, which is parsed and visualized by the mobile app. Currently you have around 100k users who are mostly based out of North America.

You have been tasked to optimize the architecture of the backend system to lower cost. What would you recommend? (Choose 2 answers)

  1. Create a new Amazon DynamoDB table each day and drop the one for the previous day after its data is on Amazon S3.
  2. Have the mobile app access Amazon DynamoDB directly instead of JSON files stored on Amazon S3.
  3. Introduce an Amazon SQS queue to buffer writes to the Amazon DynamoDB table and reduce provisioned write throughput.
  4. Introduce Amazon ElastiCache to cache reads from the Amazon DynamoDB table and reduce provisioned read throughput.
  5. Write data directly into an Amazon Redshift cluster, replacing both Amazon DynamoDB and Amazon S3.

2. You have recently joined a startup company building sensors to measure street noise and air quality in urban areas.

The company has been running a pilot deployment of around 100 sensors for 3 months. Each sensor uploads 1KB of sensor data every minute to a backend hosted on AWS.

During the pilot, you measured a peak of 10 IOPS on the database, and you stored an average of 3GB of sensor data per month in the database.

The current deployment consists of a load-balanced, auto-scaled ingestion layer using EC2 instances and a PostgreSQL RDS database with 500GB standard storage.

The pilot is considered a success and your CEO has managed to get the attention of some potential investors. The business plan requires a deployment of at least 100K sensors, which needs to be supported by the backend. You also need to store sensor data for at least two years to be able to compare year-over-year improvements.

To secure funding, you have to make sure that the platform meets these requirements and leaves room for further scaling. Which setup will meet the requirements?

  1. Add an SQS queue to the ingestion layer to buffer writes to the RDS instance
  2. Ingest data into a DynamoDB table and move old data to a Redshift cluster
  3. Replace the RDS instance with a 6 node Redshift cluster with 96TB of storage
  4. Keep the current architecture but upgrade RDS storage to 3TB and 10K provisioned IOPS

3. Your website is serving on-demand training videos to your workforce.

Videos are uploaded monthly in high-resolution MP4 format. Your workforce is distributed globally, often on the move, and using company-provided tablets that require the HTTP Live Streaming (HLS) protocol to watch a video. Your company has no video transcoding expertise and if required you may need to pay for a consultant.

How do you implement the most cost-efficient architecture without compromising high availability and quality of video delivery?

  1. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. S3 to host videos with Lifecycle Management to archive original files to Glacier after a few days. CloudFront to serve HLS transcoded videos from S3.
  2. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. S3 to host videos with Lifecycle Management to archive all files to Glacier after a few days. CloudFront to serve HLS transcoded videos from Glacier.
  3. Elastic Transcoder to transcode original high-resolution MP4 videos to HLS. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.
  4. A video transcoding pipeline running on EC2 using SQS to distribute tasks and Auto Scaling to adjust the number of nodes depending on the length of the queue. EBS volumes to host videos and EBS snapshots to incrementally back up original files after a few days. CloudFront to serve HLS transcoded videos from EC2.

解析: You shouldn’t, and can’t, use Glacier as a CloudFront origin. You don’t need an SQS-based transcoding pipeline because videos are only uploaded monthly. Using S3 with auto-archiving to Glacier is the more efficient and cost-effective approach.

4. Your Fortune 500 company has undertaken a TCO analysis evaluating the use of Amazon S3 versus acquiring more hardware.

The outcome was that all employees would be granted access to use Amazon S3 for storage of their personal documents.

Which of the following will you need to consider so you can set up a solution that incorporates single sign-on from your corporate AD or LDAP directory and restricts access for each user to a designated user folder in a bucket? (Choose 3 Answers)

  1. Setting up a federation proxy or identity provider
  2. Using AWS Security Token Service to generate temporary tokens
  3. Tagging each folder in the bucket
  4. Configuring IAM role
  5. Setting up a matching IAM user for every user in your corporate directory that needs access to a folder in the bucket
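
A minimal identity-broker sketch of how options 1, 2 and 4 fit together, assuming a proxy that has already authenticated the user against AD/LDAP; the bucket name, folder layout and function name are hypothetical:

```python
# Hypothetical identity-broker sketch: the proxy authenticates a user against
# the corporate directory, then asks STS for temporary credentials scoped to
# that user's folder in the bucket. Names below are illustrative only.
import json
import boto3

def temporary_credentials_for(username: str) -> dict:
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {  # list only the user's own prefix
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::corp-user-docs",
                "Condition": {"StringLike": {"s3:prefix": [f"home/{username}/*"]}},
            },
            {  # read/write only inside the user's folder
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::corp-user-docs/home/{username}/*",
            },
        ],
    }
    sts = boto3.client("sts")
    token = sts.get_federation_token(
        Name=username[:32],  # federated name is limited to 32 characters
        Policy=json.dumps(policy),
        DurationSeconds=3600,
    )
    return token["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken
```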

5. Your company policies require encryption of sensitive data at rest.

You are considering the possible options for protecting data while storing it at rest on an EBS data volume, attached to an EC2 instance. Which of these options would allow you to encrypt your data at rest? (Choose 3 answers)

  1. Implement third party volume encryption tools
  2. Do nothing as EBS volumes are encrypted by default
  3. Encrypt data inside your applications before storing it on EBS
  4. Encrypt data using native data encryption drivers at the file system level
  5. Implement SSL/TLS for all services running on the server

解析: Not option 5, since SSL/TLS is encryption in transit (HTTPS), not encryption of sensitive data at rest. And option 2 is simply not true: although you can nowadays enable encryption when creating an EBS volume, it is NOT turned on by default.

6. Your company is in the process of developing a next generation pet collar that collects biometric information to assist families with promoting healthy lifestyles for their pets.

Each collar will push 30KB of biometric data in JSON format every 2 seconds to a collection platform that will process and analyze the data, providing health trending information back to the pet owners and veterinarians via a web portal. Management has tasked you to architect the collection platform ensuring the following requirements are met.

Provide the ability for real-time analytics of the inbound biometric data. Ensure processing of the biometric data is highly durable, elastic and parallel. The results of the analytic processing should be persisted for data mining.

Which architecture outlined below will meet the initial requirements for the collection platform?

  1. Utilize S3 to collect the inbound sensor data, analyze the data from S3 with a daily scheduled Data Pipeline and save the results to a Redshift cluster.
  2. Utilize Amazon Kinesis to collect the inbound sensor data, analyze the data with Kinesis clients and save the results to a Redshift cluster using EMR.
  3. Utilize SQS to collect the inbound sensor data, analyze the data from SQS with Amazon Kinesis and save the results to a Microsoft SQL Server RDS instance.
  4. Utilize EMR to collect the inbound sensor data, analyze the data from EMR with Amazon Kinesis and save the results to DynamoDB.
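
For context on option 2, here is a minimal producer sketch; the stream name is an assumption. Partitioning by collar ID keeps one collar's readings ordered on a shard, which suits durable, parallel per-collar analysis by Kinesis consumers.

```python
# Minimal Kinesis producer sketch (hypothetical stream "collar-biometrics"):
# each collar pushes its JSON reading into the stream, partitioned by
# collar ID so readings from one collar stay ordered within a shard.
import json
import boto3

kinesis = boto3.client("kinesis")

def push_reading(collar_id: str, reading: dict) -> None:
    kinesis.put_record(
        StreamName="collar-biometrics",
        Data=json.dumps(reading).encode("utf-8"),
        PartitionKey=collar_id,
    )
```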

7. You’ve been brought in as solutions architect to assist an enterprise customer with their migration of an e-commerce platform to Amazon Virtual Private Cloud (VPC).

The previous architect has already deployed a 3-tier VPC.

The configuration is as follows:

VPC: vpc-2f8bc447
IGW: igw-2d8bc445
NACL: acl-2080c448

Subnets:

Web servers: subnet-258bc44d
Application servers: subnet-248bc44c
Database servers: subnet-9189c6f9

Route Tables:

rtb-218bc449
rtb-238bc44b

Associations:

subnet-258bc44d : rtb-218bc449
subnet-248bc44c : rtb-238bc44b
subnet-9189c6f9 : rtb-238bc44b

You are now ready to begin deploying EC2 instances into the VPC. Web servers must have direct access to the Internet. Application and database servers cannot have direct access to the Internet.

Which configuration below will allow you the ability to remotely administer your application and database servers, as well as allow these servers to retrieve updates from the Internet?

  • Create a bastion and NAT instance in subnet-248bc44c and add a route from rtb-238bc44b to subnet-258bc44d.
  • Add a route from rtb-238bc44b to igw-2d8bc445 and add a bastion and NAT instance within subnet-248bc44c.
  • Create a bastion and NAT instance in subnet-258bc44d. Add a route from rtb-238bc44b to igw-2d8bc445, and a new NACL that allows access between subnet-258bc44d and subnet-248bc44c.
  • Create a bastion and NAT instance in subnet-258bc44d and add a route from rtb-238bc44b to the NAT instance.

8. You require the ability to analyze a customer’s clickstream data on a website so they can do behavioral analysis.

Your customer needs to know what sequence of pages and ads their customers clicked on. This data will be used in real time to modify the page layouts as customers click through the site, to increase stickiness and advertising click-through. Which option meets the requirements for capturing and analyzing this data?

  • Log clicks in weblogs by URL, store to Amazon S3, and then analyze with Elastic MapReduce
  • Push web clicks by session to Amazon Kinesis and analyze behavior using Kinesis workers
  • Write click events directly to Amazon Redshift and then analyze with SQL
  • Publish web clicks by session to an Amazon SQS queue, then periodically drain these events to Amazon RDS and analyze with SQL

9. You are designing the network infrastructure for an application server in Amazon VPC.

Users will access all the application instances from the Internet as well as from an on-premises network. The on-premises network is connected to your VPC over an AWS Direct Connect link.

How would you design routing to meet the above requirements?

  1. Configure a single routing table with a default route via the Internet gateway. Propagate a default route via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.
  2. Configure a single routing table with a default route via the Internet gateway. Propagate specific routes for the on-premises networks via BGP on the AWS Direct Connect customer router. Associate the routing table with all VPC subnets.
  3. Configure a single routing table with two default routes: one to the Internet via an Internet gateway, the other to the on-premises network via the VPN gateway. Use this routing table across all subnets in your VPC.
  4. Configure two routing tables: one that has a default route via the Internet gateway and another that has a default route via the VPN gateway. Associate both routing tables with each VPC subnet.

10. A company is running a batch analysis every hour on their main transactional DB.

The DB is running on an RDS MySQL instance to populate their central data warehouse running on Redshift. During the execution of the batch, their transactional applications are very slow. When the batch completes, they need to update the top management dashboard with the new data.

The dashboard is produced by another system running on-premises that is currently started when a manually-sent email notifies that an update is required. The on-premises system cannot be modified because it is managed by another team.

How would you optimize this scenario to solve performance issues and automate the process as much as possible?

  1. Replace RDS with Redshift for the batch analysis and SNS to notify the on-premises system to update the dashboard
  2. Replace RDS with Redshift for the batch analysis and SQS to send a message to the on-premises system to update the dashboard
  3. Create an RDS Read Replica for the batch analysis and SNS to notify the on-premises system to update the dashboard
  4. Create an RDS Read Replica for the batch analysis and SQS to send a message to the on-premises system to update the dashboard.

11. You require the ability to analyze a large amount of data, which is stored on Amazon S3, using Amazon Elastic Map Reduce.

You are using the cc2.8xlarge instance type, whose CPUs are mostly idle during processing.

Which of the below would be the most cost efficient way to reduce the runtime of the job?

  1. Create more, smaller files on Amazon S3.
  2. Add additional cc2.8xlarge instances by introducing a task group.
  3. Use smaller instances that have higher aggregate I/O performance.
  4. Create fewer, larger files on Amazon S3.

12. You have an application running on an EC2 instance which will allow users to download files from a private S3 bucket using a pre-signed URL.

Before generating the URL the application should verify the existence of the files in S3.

How should the application use AWS credentials to access the S3 bucket securely?

  1. Use the AWS account access keys; the application retrieves the credentials from the source code of the application.
  2. Create an IAM user for the application with permissions that allow list access to the S3 bucket; launch the instance as the IAM user and retrieve the IAM user’s credentials from the EC2 instance user data.
  3. Create an IAM role for EC2 that allows list access to objects in the S3 bucket. Launch the instance with the role, and retrieve the role’s credentials from the EC2 instance metadata.
  4. Create an IAM user for the application with permissions that allow list access to the S3 bucket. The application retrieves the IAM user credentials from a temporary directory with permissions that allow read access only to the application user.
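
A sketch of option 3 in practice, assuming the instance was launched with such a role; boto3 picks up the role's temporary credentials from instance metadata automatically, and the bucket/key names here are invented:

```python
# On an instance launched with an IAM role, boto3 transparently uses the
# role's temporary credentials from instance metadata; no keys live on the
# box. Verify the object exists, then hand back a pre-signed URL.
from typing import Optional

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")  # credentials come from the instance profile

def presign_if_exists(bucket: str, key: str, expires: int = 300) -> Optional[str]:
    try:
        s3.head_object(Bucket=bucket, Key=key)  # verify the file exists first
    except ClientError:
        return None  # missing object (or no permission): no URL
    return s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=expires
    )
```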

13. You have a video transcoding application running on Amazon EC2.

Each instance polls a queue to find out which video should be transcoded, and then runs a transcoding process. If this process is interrupted, the video will be transcoded by another instance based on the queuing system.

You have a large backlog of videos which need to be transcoded and would like to reduce this backlog by adding more instances. You will need these instances only until the backlog is reduced.

Which type of Amazon EC2 instances should you use to reduce the backlog in the most cost efficient way?

  1. Reserved instances
  2. Spot instances
  3. Dedicated instances
  4. On-demand instances

14. You are designing an SSL/TLS solution that requires HTTPS clients to be authenticated by the Web server using client certificate authentication.

The solution must be resilient.

Which of the following options would you consider for configuring the web server infrastructure? (Choose 2 answers)

  1. Configure ELB with TCP listeners on TCP/443, and place the web servers behind it.
  2. Configure your web servers with EIPs. Place the web servers in a Route53 record set and configure health checks against all web servers.
  3. Configure ELB with HTTPS listeners, and place the Web servers behind it.
  4. Configure your web servers as the origins for a CloudFront distribution. Use custom SSL certificates on your CloudFront distribution.

解析: Needs elaboration; the question hinges on client-side certificate authentication. With client certificates (mutual TLS), the TLS session must terminate on the web servers themselves, so the load balancer has to pass TCP/443 straight through (option 1) rather than terminate HTTPS; an HTTPS ELB listener or CloudFront distribution terminates TLS itself, so the client certificate never reaches the web server. Option 2 also works because clients connect directly to the web servers.

15. You are designing a data leak prevention solution for your VPC environment.

You want your VPC Instances to be able to access software depots and distributions on the Internet for product updates.

The depots and distributions are accessible via third-party CDNs by their URLs. You want to explicitly deny any other outbound connections from your VPC instances to hosts on the Internet.

Which of the following options would you consider?

  1. Configure a web proxy server in your VPC and enforce URL-based rules for outbound access. Remove default routes.
  2. Implement security groups and configure outbound rules to only permit traffic to software depots.
  3. Move all your instances into private VPC subnets, remove default routes from all routing tables and add specific routes to the software depots and distributions only.
  4. Implement network access control lists that allow specific destinations, with an implicit deny-all rule.

解析: Needs elaboration; why is filtering by URL the key? The depots are addressed by URL, and CDN IP addresses change constantly, so IP-based controls (security groups, NACLs, routes) cannot express the rule. Only a web proxy can enforce URL-based rules for outbound access, which is why option 1 is correct.

16. You are designing a connectivity solution between on-premises infrastructure and Amazon VPC.

Your on-premises servers will be communicating with your VPC instances. You will be establishing IPsec tunnels over the Internet. You will be using VPN gateways and terminating the IPsec tunnels on AWS-supported customer gateways.

Which of the following objectives would you achieve by implementing an IPSec tunnel as outlined above? (Choose 4 answers)

  1. End-to-end protection of data in transit
  2. End-to-end Identity authentication
  3. Data encryption across the Internet
  4. Protection of data in transit over the Internet
  5. Peer identity authentication between VPN gateway and customer gateway
  6. Data integrity protection across the Internet

17. Which services allow the customer to retain full administrative privileges of the underlying EC2 instances?

Choose 2 answers

  1. Amazon Relational Database Service
  2. Amazon Elastic Map Reduce
  3. Amazon ElastiCache
  4. Amazon DynamoDB
  5. AWS Elastic Beanstalk

18. Fill in the blanks: The base URI for all requests for instance metadata is ____

  1. http://254.169.169.254/latest/
  2. http://169.169.254.254/latest/
  3. http://127.0.0.1/latest/
  4. http://169.254.169.254/latest/
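
The correct base URI is option 4. As a quick illustration (IMDSv1-style unauthenticated GET; newer instances may additionally require an IMDSv2 session token), reading the instance ID from a running instance looks like this:

```python
# Tiny sketch: read this instance's ID from the metadata service.
import urllib.request

BASE = "http://169.254.169.254/latest/"

with urllib.request.urlopen(BASE + "meta-data/instance-id", timeout=2) as resp:
    print(resp.read().decode())
```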

19. Amazon RDS DB snapshots and automated backups are stored in ____

  1. Amazon S3
  2. Amazon ECS Volume
  3. Amazon RDS
  4. Amazon EMR

20. A company is building a two-tier web application to serve dynamic transaction-based content.

The data tier is leveraging an Online Transactional Processing (OLTP) database.

What services should you leverage to enable an elastic and scalable web tier?

  1. Elastic Load Balancing, Amazon EC2, and Auto Scaling
  2. Elastic Load Balancing, Amazon RDS with Multi-AZ, and Amazon S3
  3. Amazon RDS with Multi-AZ and Auto Scaling
  4. Amazon EC2, Amazon DynamoDB, and Amazon S3

21. Your department creates regular analytics reports from your company’s log files.

All log data is collected in Amazon S3 and processed by daily Amazon Elastic Map Reduce (EMR) jobs that generate daily PDF reports and aggregated tables in CSV format for an Amazon Redshift data warehouse.

Your CFO requests that you optimize the cost structure for this system.

Which of the following alternatives will lower costs without compromising average performance of the system or data integrity for the raw data?

  1. Use reduced redundancy storage (RRS) for all data in S3. Use a combination of Spot Instances and Reserved Instances for Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.
  2. Use reduced redundancy storage (RRS) for PDF and .csv data in S3. Add Spot Instances to EMR jobs. Use Spot Instances for Amazon Redshift.
  3. Use reduced redundancy storage (RRS) for PDF and .csv data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.
  4. Use reduced redundancy storage (RRS) for all data in Amazon S3. Add Spot Instances to Amazon EMR jobs. Use Reserved Instances for Amazon Redshift.

解析: Using Reduced Redundancy Storage: Amazon S3 stores objects according to their storage class. It assigns the storage class to an object when it is written to Amazon S3. You can assign objects a specific storage class (standard or reduced redundancy) only when you write the objects to an Amazon S3 bucket or when you copy objects that are already stored in Amazon S3. Standard is the default storage class. For information about storage classes, see Object Key and Metadata. In order to reduce storage costs, you can use reduced redundancy storage for noncritical, reproducible data at lower levels of redundancy than Amazon S3 provides with standard storage. The lower level of redundancy results in less durability and availability, but in many cases, the lower costs can make reduced redundancy storage an acceptable storage solution. For example, it can be a cost-effective solution for sharing media content that is durably stored elsewhere. It can also make sense if you are storing thumbnails and other resized images that can be easily reproduced from an original image.

22. Your application provides data transformation services.

Files containing data to be transformed are first uploaded to Amazon S3 and then transformed by a fleet of spot EC2 instances. Files submitted by your premium customers must be transformed with the highest priority.

How should you implement such a system?

  1. Use a DynamoDB table with an attribute defining the priority level. Transformation instances will scan the table for tasks, sorting the results by priority level.
  2. Use Route 53 latency based-routing to send high priority tasks to the closest transformation instances
  3. Use two SQS queues, one for high priority messages, the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue.
  4. Use a single SQS queue. Each message contains the priority level. Transformation instances poll high-priority messages first.

解析: Why isn’t the answer option 2? Route 53 latency-based routing directs clients to the lowest-latency endpoint; it knows nothing about customer tiers or message priority, so it cannot guarantee premium files are processed first. The two-queue pattern of option 3 does exactly that; see the sketch below.
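
A worker-loop sketch of the two-queue pattern in option 3; the queue names are hypothetical:

```python
# Workers always poll the premium queue first and fall back to the default
# queue only when it is empty.
import boto3

sqs = boto3.client("sqs")
HIGH = sqs.get_queue_url(QueueName="transform-high")["QueueUrl"]
DEFAULT = sqs.get_queue_url(QueueName="transform-default")["QueueUrl"]

def next_task():
    for url in (HIGH, DEFAULT):  # premium queue is always checked first
        msgs = sqs.receive_message(
            QueueUrl=url, MaxNumberOfMessages=1, WaitTimeSeconds=0
        ).get("Messages", [])
        if msgs:
            sqs.delete_message(QueueUrl=url,
                               ReceiptHandle=msgs[0]["ReceiptHandle"])
            return msgs[0]
    return None  # both queues empty; caller can sleep and retry
```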

23. A user has created photo editing software and hosted it on EC2.

The software accepts requests from the user about the photo format and resolution and sends a message to S3 to enhance the picture accordingly.

Which of the below mentioned AWS services will help make a scalable software with the AWS infrastructure in this scenario?

  1. AWS Simple Notification Service
  2. AWS Simple Queue Service
  3. AWS Elastic Transcoder
  4. AWS Glacier

24. A user wants to achieve High Availability with PostgreSQL DB.

Which of the below mentioned functionalities helps achieve HA?

  1. Multi AZ
  2. Read Replica
  3. Multi region
  4. PostgreSQL does not support HA

解析: The Multi AZ feature allows the user to achieve High Availability. For Multi AZ, Amazon RDS automatically provisions and maintains a synchronous “standby” replica in a different Availability Zone. Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Welcome.html

25. After setting up several database instances in Amazon Relational Database Service (Amazon RDS) you decide that you need to track the performance and health of your databases.

How can you do this?

  1. Subscribe to Amazon RDS events to be notified when changes occur with a DB instance, DB snapshot, DB parameter group, or DB security group.
  2. Use the free Amazon CloudWatch service to monitor the performance and health of a DB instance.
  3. All of the items listed will track the performance and health of a database.
  4. View, download, or watch database log files using the Amazon RDS console or Amazon RDS APIs. You can also query some database log files that are loaded into database tables.

解析: Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizeable capacity for an industry-standard relational database and manages common database administration tasks. There are several ways you can track the performance and health of a database or a DB instance. You can:
  • Use the free Amazon CloudWatch service to monitor the performance and health of a DB instance.
  • Subscribe to Amazon RDS events to be notified when changes occur with a DB instance, DB snapshot, DB parameter group, or DB security group.
  • View, download, or watch database log files using the Amazon RDS console or Amazon RDS APIs. You can also query some database log files that are loaded into database tables.
  • Use the AWS CloudTrail service to record AWS calls made by your AWS account. The calls are recorded in log files and stored in an Amazon S3 bucket.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Monitoring.html
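
As a small illustration of the CloudWatch route, pulling one of the free RDS health metrics might look like this (the DB identifier is hypothetical):

```python
# Fetch the last hour of CPU utilization for a DB instance from CloudWatch.
import datetime as dt
import boto3

cw = boto3.client("cloudwatch")
now = dt.datetime.now(dt.timezone.utc)

stats = cw.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydb"}],
    StartTime=now - dt.timedelta(hours=1),
    EndTime=now,
    Period=300,                 # 5-minute datapoints
    Statistics=["Average"],
)
print(sorted(stats["Datapoints"], key=lambda d: d["Timestamp"]))
```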

26. An accountant asks you to design a small VPC network for him and, due to the nature of his business, just needs something where the workload on the network will be low, and dynamic data will be accessed infrequently.

Being an accountant, low cost is also a major factor.

Which EBS volume type would best suit his requirements?

  1. Magnetic
  2. Any, as they all perform the same and cost the same.
  3. General Purpose (SSD)
  4. Magnetic or Provisioned IOPS (SSD)

解析: You can choose between three EBS volume types to best meet the needs of their workloads: General Purpose (SSD), Provisioned IOPS (SSD), and Magnetic. General Purpose (SSD) is the new, SSD-backed, general purpose EBS volume type that we recommend as the default choice for customers. General Purpose (SSD) volumes are suitable for a broad range of workloads, including small to medium sized databases, development and test environments, and boot volumes. Provisioned IOPS (SSD) volumes offer storage with consistent and low-latency performance, and are designed for I/O intensive applications such as large relational or NoSQL databases. Magnetic volumes provide the lowest cost per gigabyte of all EBS volume types. Magnetic volumes are ideal for workloads where data is accessed infrequently, and applications where the lowest storage cost is important. Reference: https://aws.amazon.com/ec2/faqs/

27. When you put objects in Amazon S3, what is the indication that an object was successfully stored?

  1. A HTTP 200 result code and MD5 checksum, taken together, indicate that the operation was successful.
  2. Amazon S3 is engineered for 99.999999999% durability. Therefore there is no need to confirm that data was inserted.
  3. A success code is inserted into the S3 object metadata.
  4. Each S3 account has a special bucket named _s3_logs. Success codes are written to this bucket with a timestamp and checksum.
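
A sketch of what option 1 means in code: for a simple (non-multipart) PUT, the ETag S3 returns is the hex MD5 of the payload, so an HTTP 200 plus a matching checksum confirms the object was stored intact (bucket and key are invented):

```python
# Upload an object and verify both the 200 status and the MD5/ETag match.
import hashlib
import boto3

s3 = boto3.client("s3")
body = b"example payload"

resp = s3.put_object(Bucket="example-bucket", Key="example.txt", Body=body)
assert resp["ResponseMetadata"]["HTTPStatusCode"] == 200
assert resp["ETag"].strip('"') == hashlib.md5(body).hexdigest()
```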

28. You are signed in as root user on your account but there is an Amazon S3 bucket under your account that you cannot access.

What is a possible reason for this?

  1. An IAM user assigned a bucket policy to an Amazon S3 bucket and didn’t specify the root user as a principal
  2. The S3 bucket is full.
  3. The S3 bucket has reached the maximum number of objects allowed.
  4. You are in the wrong availability zone

解析: With IAM, you can centrally manage users, security credentials such as access keys, and permissions that control which AWS resources users can access. In some cases, you might have an IAM user with full access to IAM and Amazon S3. If the IAM user assigns a bucket policy to an Amazon S3 bucket and doesn’t specify the root user as a principal, the root user is denied access to that bucket. However, as the root user, you can still access the bucket by modifying the bucket policy to allow root user access. Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/iam-troubleshooting.html#testing2

29. You have decided to change the instance type for instances running in your application tier that is using Auto Scaling.

In which area below would you change the instance type definition?

  1. Auto Scaling policy
  2. Auto Scaling group
  3. Auto Scaling tags
  4. Auto Scaling launch configuration

30. You have launched an Amazon Elastic Compute Cloud (EC2) instance into a public subnet with a primary private IP address assigned.

An Internet gateway is attached to the VPC, and the public route table is configured to send all Internet-based traffic to the Internet gateway. The instance security group is set to allow all outbound traffic, but the instance cannot access the Internet.

Why is the Internet unreachable from this instance?

  1. The instance does not have a public IP address.
  2. The internet gateway security group must allow all outbound traffic.
  3. The instance security group must allow all inbound traffic.
  4. The instance “Source/Destination check” property must be enabled.

31. You have set up an Elastic Load Balancer (ELB) with the usual default settings, which routes each request independently to the application instance with the smallest load.

However, someone has asked you to bind a user’s session to a specific application instance so as to ensure that all requests coming from the user during the session will be sent to the same application instance.

AWS has a feature to do this. What is it called?

  1. Connection draining
  2. Proxy protocol
  3. Tagging
  4. Sticky session

解析:

  • Connection draining: you can dynamically add or remove instances from an ELB’s target pool. Adding is straightforward, but removing is slightly more complicated. When you de-register an instance from the ELB, the ELB will of course stop forwarding newly received requests to that instance, but what about requests it has already accepted? A complete exchange consists of a request and a response; if a request has been accepted but its response has not yet been returned when you de-register the instance, brutally closing the ELB-to-server connection means the request was processed but the client never receives the response. Connection draining solves this: it lets you set a time window during which, even though the instance is de-registered, existing connections are kept open, and only after that window elapses are the connections closed.
  • Sticky sessions: ELB uses a basic round-robin load-balancing algorithm, which has a problem: two requests from the same client may be dispatched to different machines, losing the session. For example, your login request is routed by the ELB to machine A, which stores your login state in its session, but your request for another page is routed to machine B, whose session has no record of your login, so you are redirected to the login page again. Sticky sessions guarantee that requests from the same client are always routed to one fixed server until the session expires or the cookie is cleared; see the sketch below.
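
A sketch of enabling duration-based stickiness on a classic ELB (the load balancer generation matching this question); the load balancer name, policy name, port and duration are assumptions:

```python
# Create a duration-based stickiness policy and attach it to a listener.
import boto3

elb = boto3.client("elb")  # classic ELB API

elb.create_lb_cookie_stickiness_policy(
    LoadBalancerName="web-elb",
    PolicyName="sticky-30min",
    CookieExpirationPeriod=1800,  # seconds a client stays pinned to one server
)
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="web-elb",
    LoadBalancerPort=80,
    PolicyNames=["sticky-30min"],
)
```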

32. You have a web application running on six Amazon EC2 instances, consuming about 45% of resources on each instance.

You are using auto-scaling to make sure that six instances are running at all times. The number of requests this application processes is consistent and does not experience spikes. The application is critical to your business and you want high availability at all times. You want the load to be distributed evenly between all instances. You also want to use the same Amazon Machine Image (AMI) for all instances.

Which of the following architectural choices should you make?

  1. Deploy 6 EC2 instances in one availability zone and use Amazon Elastic Load Balancer.
  2. Deploy 3 EC2 instances in one region and 3 in another region and use Amazon Elastic Load Balancer.
  3. Deploy 3 EC2 instances in one availability zone and 3 in another availability zone and use Amazon Elastic Load Balancer.
  4. Deploy 2 EC2 instances in three regions and use Amazon Elastic Load Balancer.

解析: A load balancer accepts incoming traffic from clients and routes requests to its registered EC2 instances in one or more Availability Zones. http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/how-elbworks.html Updated Security Whitepaper link: https://d0.awsstatic.com/whitepapers/aws-security-whitepaper.pdf

33. Which of the following items are required to allow an application deployed on an EC2 instance to write data to a DynamoDB table?

Assume that no security keys are allowed to be stored on the EC2 instance. (Choose 2 answers)

  1. Create an IAM Role that allows write access to the DynamoDB table.
  2. Add an IAM Role to a running EC2 instance.
  3. Create an IAM User that allows write access to the DynamoDB table.
  4. Add an IAM User to a running EC2 instance.
  5. Launch an EC2 instance with the IAM Role included in the launch.

解析: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/TicTacToe.Phase3.html

34. If an EBS-backed EC2 instance is stopped, what happens to the data on any ephemeral store volumes?

  1. Data is automatically saved in an EBS volume.
  2. Data is unavailable until the instance is restarted.
  3. Data will be deleted and will no longer be accessible.
  4. Data is automatically saved as an EBS snapshot.

解析: An “EBS-backed” instance is an EC2 instance which uses an EBS volume as its root device. An EBS volume behaves like a raw, unformatted, external block device that you can attach to a single instance; it is not physically attached to the instance host computer (more like network-attached storage). The volume persists independently from the running life of an instance. After an EBS volume is attached to an instance, you can use it like any other physical hard drive. You can also detach an EBS volume from one instance and attach it to another instance. EBS volumes can also be created as encrypted volumes using the Amazon EBS encryption feature. Ephemeral (instance store) volumes, by contrast, do not persist: when the instance is stopped, their data is deleted and no longer accessible.

35. A user is aware that a huge download is occurring on his instance.

He has already set the Auto Scaling policy to increase the instance count when the network I/O increases beyond a certain limit.

How can the user ensure that this temporary event does not result in scaling?

  1. The network I/O is not affected during data download
  2. The policy cannot be set on the network I/O
  3. There is no way the user can stop scaling as it is already configured
  4. Suspend scaling

解析: The user may want to stop the automated scaling processes on the Auto Scaling groups, either to perform manual operations or during emergency situations. To do this, the user can suspend one or more scaling processes at any time; once finished, the user can resume all the suspended processes. See the sketch below. Reference: http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html
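
A sketch of option 4 with the Auto Scaling API; the group name is hypothetical, and suspending just the AlarmNotification process keeps the group from reacting to CloudWatch alarms during the download:

```python
# Temporarily ignore CloudWatch alarm triggers on the group, then resume.
import boto3

asg = boto3.client("autoscaling")

asg.suspend_processes(
    AutoScalingGroupName="download-asg",
    ScalingProcesses=["AlarmNotification"],  # pause alarm-driven scaling only
)
# ... perform the large download ...
asg.resume_processes(
    AutoScalingGroupName="download-asg",
    ScalingProcesses=["AlarmNotification"],
)
```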

36. In AWS, which security aspects are the customer’s responsibility?

Choose 4 answers

  1. Security Group and ACL (Access Control List) settings
  2. Decommissioning storage devices
  3. Patch management on the EC2 instance’s operating system
  4. Life-cycle management of IAM credentials
  5. Controlling physical access to compute resources
  6. Encryption of EBS (Elastic Block Storage) volume

解析: http://media.amazonwebservices.com/AWS_Security_Best_Practices.pdf

37. Which of the following are characteristics of Amazon VPC subnets?

Choose 2 answers

  1. Each subnet spans at least 2 Availability Zones to provide a high-availability environment.
  2. Each subnet maps to a single Availability Zone.
  3. CIDR block mask of /25 is the smallest range supported.
  4. By default, all subnets can route between each other, whether they are private or public.
  5. Instances in a private subnet can communicate with the Internet only if they have an Elastic IP.

解析: Even though we know the right answers, it is sometimes good to know why the other answers are wrong. Option 1 is wrong because a subnet maps to a single AZ. Option 3 is wrong because /28 is the smallest subnet; Amazon reserves the first four and the last address in each subnet. Option 5 is wrong because a private subnet needs a NAT appliance, not an Elastic IP.

38. Which technique can be used to integrate AWS IAM (Identity and Access Management) with an on-premise LDAP (Lightweight Directory Access Protocol) directory service?

  1. Use an IAM policy that references the LDAP account identifiers and the AWS credentials.
  2. Use SAML (Security Assertion Markup Language) to enable single sign-on between AWS and LDAP.
  3. Use AWS Security Token Service from an identity broker to issue short-lived AWS credentials.
  4. Use IAM roles to automatically rotate the IAM credentials when LDAP credentials are updated.
  5. Use the LDAP credentials to restrict a group of users from launching specific EC2 instance types

解析: https://d0.awsstatic.com/whitepapers/aws-whitepaper-single-sign-on-integrating-aws-openldap-and-shibboleth.pdf

39. You have been asked to tighten up the password policies in your organization after a serious security breach, so you need to consider every possible security measure.

Which of the following is not an account password policy for IAM Users that can be set?

  1. Force IAM users to contact an account administrator when the user has allowed his or her password to expire.
  2. A minimum password length.
  3. Force IAM users to contact an account administrator when the user has entered his password incorrectly.
  4. Prevent IAM users from reusing previous passwords.

解析: IAM users need passwords in order to access the AWS Management Console. (They do not need passwords if they will access AWS resources programmatically by using the CLI, AWS SDKs, or the APIs.) You can use a password policy to do these things:
  • Set a minimum password length.
  • Require specific character types, including uppercase letters, lowercase letters, numbers, and non-alphanumeric characters. Be sure to remind your users that passwords are case sensitive.
  • Allow all IAM users to change their own passwords.
  • Require IAM users to change their password after a specified period of time (enable password expiration).
  • Prevent IAM users from reusing previous passwords.
  • Force IAM users to contact an account administrator when the user has allowed his or her password to expire.
Reference: http://docs.aws.amazon.com/IAM/latest/UserGuide/Using_ManagingPasswordPolicies.html
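
A sketch of setting several of those policy knobs through the IAM API; the specific values are illustrative:

```python
# Apply an account-wide password policy covering the options listed above.
import boto3

iam = boto3.client("iam")
iam.update_account_password_policy(
    MinimumPasswordLength=12,
    RequireUppercaseCharacters=True,
    RequireNumbers=True,
    PasswordReusePrevention=5,   # prevent reusing the last 5 passwords
    MaxPasswordAge=90,           # force rotation every 90 days
    HardExpiry=True,             # expired users must contact an administrator
)
```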

40. You need to develop and run some new applications on AWS and you know that Elastic Beanstalk and CloudFormation can both help as a deployment mechanism for a broad range of AWS resources.

Which of the following statements best describes the differences between Elastic Beanstalk and CloudFormation?

  1. Elastic Beanstalk uses Elastic load balancing and CloudFormation doesn’t.
  2. CloudFormation is faster in deploying applications than Elastic Beanstalk.
  3. Elastic Beanstalk is faster in deploying applications than CloudFormation.
  4. CloudFormation is much more powerful than Elastic Beanstalk, because you can actually design and script custom resources

解析: These services are designed to complement each other. AWS Elastic Beanstalk provides an environment to easily develop and run applications in the cloud. It is integrated with developer tools and provides a one-stop experience for you to manage the lifecycle of your applications. AWS CloudFormation is a convenient deployment mechanism for a broad range of AWS resources. It supports the infrastructure needs of many different types of applications such as existing enterprise applications, legacy applications, applications built using a variety of AWS resources and container-based solutions (including those built using AWS Elastic Beanstalk). AWS CloudFormation introduces two new concepts: The template, a JSON-format, text-based file that describes all the AWS resources you need to deploy to run your application and the stack, the set of AWS resources that are created and managed as a single unit when AWS CloudFormation instantiates a template. Reference: http://aws.amazon.com/cloudformation/faqs/

41. Your application provides data transformation services. Files containing data to be transformed are first uploaded to Amazon S3 and then transformed by a fleet of spot EC2 instances.

Files submitted by your premium customers must be transformed with the highest priority.

How should you implement such a system?

  1. Use a DynamoDB table with an attribute defining the priority level. Transformation instances will scan the table for tasks, sorting the results by priority level.
  2. Use Route 53 latency based-routing to send high priority tasks to the closest transformation instances.
  3. Use two SQS queues, one for high priority messages, the other for default priority. Transformation instances first poll the high priority queue; if there is no message, they poll the default priority queue.
  4. Use a single SQS queue. Each message contains the priority level. Transformation instances poll high-priority messages first.

42. Your organization is in the business of architecting complex transactional databases.

For a variety of reasons, this has been done on EBS.

What is AWS’s recommendation for customers who have architected databases using EBS for backups?

  1. Backups to Amazon S3 be performed through the database management system.
  2. Backups to AWS Storage Gateway be performed through the database management system.
  3. If you take regular snapshots no further backups are required.
  4. Backups to Amazon Glacier be performed through the database management system.

43. Your team has a Tomcat-based Java application you need to deploy into development, test and production environments.

After some research, you opt to use Elastic Beanstalk due to its tight integration with your developer tools, and RDS due to its ease of management. Your QA team lead points out that you need to roll a sanitized set of production data into your environment on a nightly basis.
Similarly, other software teams in your org want access to that same restored data via their EC2 instances in your VPC. The optimal setup for persistence and security that meets the above requirements would be the following:

  1. Create your RDS instance as part of your Elastic Beanstalk definition and alter its security group to allow access to it from hosts in your application subnets.
  2. Create your RDS instance separately and add its IP address to your application’s DB connection strings in your code. Alter its security group to allow access to it from hosts within your VPC’s IP address block.
  3. Create your RDS instance separately and pass its DNS name to your app’s DB connection string as an environment variable. Create a security group for client machines and add it as a valid source for DB traffic to the security group of the RDS instance itself.
  4. Create your RDS instance separately and pass its DNS name to your app’s DB connection string as an environment variable. Alter its security group to allow access to it from hosts in your application subnets.

44. Your web application front end consists of multiple EC2 instances behind an Elastic Load Balancer.

You configured ELB to perform health checks on these EC2 instances. If an instance fails to pass health checks, which statement will be true?

  1. The instance gets terminated automatically by the ELB.
  2. The instance gets quarantined by the ELB for root cause analysis.
  3. The instance is replaced automatically by the ELB.
  4. The ELB stops sending traffic to the instance that failed its health check.

45. A company has a workflow that sends video files from their on-premise system to AWS for transcoding.

They use EC2 worker instances that pull transcoding jobs from SQS. Why is SQS an appropriate service for this scenario?

  1. SQS guarantees the order of the messages.
  2. SQS synchronously provides transcoding output.
  3. SQS checks the health of the worker instances.
  4. SQS helps to facilitate horizontal scaling of encoding tasks.

解析: Option 1: messages may arrive out of order; SQS does not guarantee ordering. Option 2: SQS is a messaging system, not a transcoding system. Option 3: SQS does not check the health of external clients; it is a pull-based system. Option 4 is the only logical answer.

46. A newspaper organization has an on-premises application which allows the public to search its back catalogue and retrieve individual newspaper pages via a website written in Java.

They have scanned the old newspapers into JPEGs (approx 17TB) and used Optical Character Recognition (OCR) to populate a commercial search product. The hosting platform and software are now end of life, and the organization wants to migrate its archive to AWS and produce a cost-efficient architecture that is still designed for availability and durability. Which is the most appropriate?

  1. Use S3 with reduced redundancy to store and serve the scanned files, install the commercial search application on EC2 instances and configure with auto-scaling and an Elastic Load Balancer.
  2. Model the environment using CloudFormation, use an EC2 instance running Apache webserver and an open source search application, stripe multiple standard EBS volumes together to store the JPEGs and search index.
  3. Use S3 with standard redundancy to store and serve the scanned files, use CloudSearch for query processing, and use Elastic Beanstalk to host the website across multiple availability zones.
  4. Use a single-AZ RDS MySQL instance to store the search index and the JPEG images; use an EC2 instance to serve the website and translate user queries into SQL.
  5. Use a CloudFront download distribution to serve the JPEGs to the end users and install the current commercial search product, along with a Java container for the website, on EC2 instances and use Route53 with DNS round-robin.

47. A read only news reporting site with a combined web and application tier and a database tier that receives large and unpredictable traffic demands must be able to respond to these traffic fluctuations automatically.

What AWS services should be used to meet these requirements?

  1. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas
  2. Stateful instances for the web and application tier in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas
  3. Stateful instances for the web and application tier in an Auto Scaling group monitored with CloudWatch, and multi-AZ RDS
  4. Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an Auto Scaling group monitored with CloudWatch, and multi-AZ RDS

解析: Stateless instances for the web and application tier synchronized using ElastiCache Memcached in an Auto Scaling group monitored with CloudWatch, and RDS with read replicas (option 1). Read replicas add the read capacity a read-only site with unpredictable traffic needs, whereas multi-AZ RDS only adds availability, not read throughput.

48. What happens to the I/O operations while you take a database snapshot?

  1. I/O operations to the database are suspended for an hour while the backup is in progress.
  2. I/O operations to the database are sent to a Replica (if available) for a few minutes while the backup is in progress.
  3. I/O operations will be functioning normally
  4. I/O operations to the database are suspended for a few minutes while the backup is in progress.

解析: During the backup window, storage I/O may be briefly suspended while the backup process initializes (typically under a few seconds) and you may experience a brief period of elevated latency. There is no I/O suspension for Multi-AZ DB deployments, since the backup is taken from the standby.

49. You are tasked with moving a legacy application from a virtual machine running inside your datacenter to an Amazon VPC.

Unfortunately this app requires access to a number of on-premises services, and no one who configured the app still works for your company. Even worse, there’s no documentation for it.

What will allow the application running inside the VPC to reach back and access its internal dependencies without being reconfigured? (Choose 3 answers)

  1. An AWS Direct Connect link between the VPC and the network housing the internal services.
  2. An Internet Gateway to allow a VPN connection
  3. An Elastic IP address on the VPC instance
  4. An IP address space that does not conflict with the one on-premises
  5. Entries in Amazon Route 53 that allow the instance to resolve its dependencies’ IP addresses
  6. A VM Import of the current virtual machine

50. You need to configure an Amazon S3 bucket to serve static assets for your public-facing web application.

Which methods ensure that all objects uploaded to the bucket are set to public read? Choose 2 answers

  1. Set permissions on the object to public read during upload.
  2. Configure the bucket ACL to set all objects to public read.
  3. Configure the bucket policy to set all objects to public read.
  4. Use AWS Identity and Access Management roles to set the bucket to public read.
  5. Amazon S3 objects default to public read, so no action is needed.

解析: Option 2 is incorrect: you use ACLs to grant permissions on individual buckets or objects to specific grantees; a bucket ACL does not automatically set every object in the bucket to public read. The two working methods are shown in the sketch below.
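
A sketch of the two working methods side by side; the bucket and keys are invented:

```python
# Option 1: set the ACL per object at upload time.
# Option 3: attach a bucket policy that makes every object public-read.
import json
import boto3

s3 = boto3.client("s3")

# Per-object ACL at upload (option 1)
s3.put_object(Bucket="example-assets", Key="css/site.css",
              Body=b"body{}", ACL="public-read")

# Bucket policy covering all objects (option 3)
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-assets/*",
    }],
}
s3.put_bucket_policy(Bucket="example-assets", Policy=json.dumps(policy))
```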

51. Your system recently experienced downtime.

During the troubleshooting process, you found that a new administrator mistakenly terminated several production EC2 instances.

Which of the following strategies will help prevent a similar situation in the future?

The administrator still must be able to:
– launch, start, stop, and terminate development resources.
– launch and start production instances.

  1. Create an IAM user, which is not allowed to terminate instances by leveraging production EC2 termination protection.
  2. Leverage resource based tagging along with an IAM user, which can prevent specific users from terminating production EC2 resources.
  3. Leverage EC2 termination protection and multi-factor authentication, which together require users to authenticate before terminating EC2 instances
  4. Create an IAM user and apply an IAM role which prevents users from terminating production EC2 instances.

52. A web-startup runs its very successful social news application on Amazon EC2 with an Elastic Load Balancer, an Auto-Scaling group of Java/Tomcat application-servers, and DynamoDB as data store.

The main web-application best runs on m2.xlarge instances since it is highly memory-bound. Each new deployment requires semi-automated creation and testing of a new AMI for the application servers, which takes quite a while and is therefore only done once per week.

Recently, a new chat feature has been implemented in Node.js and waits to be integrated in the architecture. First tests show that the new component is CPU-bound. Because the company has some experience with using Chef, they decided to streamline the deployment process and use AWS OpsWorks as an application lifecycle tool to simplify management of the application and reduce the deployment cycles.

What configuration in AWS OpsWorks is necessary to integrate the new chat module in the most cost-efficient and flexible way?

  1. Create one AWS OpsWorks stack, create one AWS OpsWorks layer, create one custom recipe
  2. Create one AWS OpsWorks stack, create two AWS OpsWorks layers, create one custom recipe
  3. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create one custom recipe
  4. Create two AWS OpsWorks stacks, create two AWS OpsWorks layers, create two custom recipes

53. You are designing Internet connectivity for your VPC.

The Web servers must be available on the Internet. The application must have a highly available architecture.

Which alternatives should you consider? (Choose 2 answers)

  1. Configure a NAT instance in your VPC. Create a default route via the NAT instance and associate it with all subnets. Configure a DNS A record that points to the NAT instance’s public IP address.
  2. Configure a CloudFront distribution and configure the origin to point to the private IP addresses of your web servers. Configure a Route53 CNAME record to your CloudFront distribution.
  3. Place all your web servers behind ELB. Configure a Route53 CNAME to point to the ELB DNS name.
  4. Assign EIPs to all web servers. Configure a Route53 record set with all EIPs, with health checks and DNS failover.
  5. Configure ELB with an EIP. Place all your web servers behind ELB. Configure a Route53 A record that points to the EIP.

54. You are migrating a legacy client-server application to AWS.

The application responds to a specific DNS domain (e.g. www.example.com) and has a 2-tier architecture, with multiple application servers and a database server. Remote clients use TCP to connect to the application servers. The application servers need to know the IP address of the clients in order to function properly and are currently taking that information from the TCP socket.

A Multi-AZ RDS MySQL instance will be used for the database. During the migration you can change the application code but you have to file a change request.

How would you implement the architecture on AWS in order to maximize scalability and high availability?

  1. File a change request to implement Proxy Protocol support in the application. Use an ELB with a TCP listener and Proxy Protocol enabled to distribute load on two application servers in different AZs.
  2. File a change request to implement Cross-Zone support in the application. Use an ELB with a TCP listener and Cross-Zone Load Balancing enabled, two application servers in different AZs.
  3. File a change request to implement Latency Based Routing support in the application. Use Route 53 with Latency Based Routing enabled to distribute load on two application servers in different AZs.
  4. File a change request to implement Alias Resource support in the application. Use a Route 53 Alias Resource Record to distribute load on two application servers in different AZs.

55. You have deployed a three-tier web application in a VPC with a CIDR block of 10.0.0.0/28.

You initially deploy two web servers, two application servers, two database servers and one NAT instance, for a total of seven EC2 instances. The web, application and database servers are deployed across two Availability Zones (AZs). You also deploy an ELB in front of the two web servers, and use Route53 for DNS. Web traffic gradually increases in the first few days following the deployment, so you attempt to double the number of instances in each tier of the application to handle the new load. Unfortunately some of these new instances fail to launch.

Which of the following could be the root cause? (Choose 2 answers)

  1. The Internet Gateway (IGW) of your VPC has scaled up, adding more instances to handle the traffic spike, reducing the number of available private IP addresses for new instance launches.
  2. AWS reserves one IP address in each subnet’s CIDR block for Route53, so you do not have enough addresses left to launch all of the new EC2 instances.
  3. AWS reserves the first and the last private IP address in each subnet’s CIDR block, so you do not have enough addresses left to launch all of the new EC2 instances.
  4. The ELB has scaled up, adding more instances to handle the traffic, reducing the number of available private IP addresses for new instance launches.
  5. AWS reserves the first four and the last IP address in each subnet’s CIDR block, so you do not have enough addresses left to launch all of the new EC2 instances.
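
The arithmetic behind the address shortage, as a quick sanity check (this treats the whole /28 as one address pool; carving it into per-tier subnets only makes the shortage worse):

```python
# Back-of-the-envelope check for a 10.0.0.0/28 VPC.
total = 2 ** (32 - 28)      # a /28 contains 16 addresses
reserved = 5                # AWS reserves the first 4 and the last address
usable = total - reserved   # 11 usable private IPs
in_use = 7                  # 2 web + 2 app + 2 db + 1 NAT (plus ELB nodes)

print(usable, usable - in_use)  # 11 usable in total, at most 4 still free
# Doubling to 14 instances can never fit in 11 usable addresses.
```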

56. A customer has a single 3-TB volume on-premises that is used to hold a large repository of images and print layout files.

This repository is growing at 500 GB a year and must be presented as a single logical volume. The customer is becoming increasingly constrained with their local storage capacity and wants an off-site backup of this data, while maintaining low-latency access to their frequently accessed data.

Which AWS Storage Gateway configuration meets the customer requirements?

  1. Gateway-Cached volumes with snapshots scheduled to Amazon S3
  2. Gateway-Stored volumes with snapshots scheduled to Amazon S3
  3. Gateway-Virtual Tape Library with snapshots to Amazon S3
  4. Gateway-Virtual Tape Library with snapshots to Amazon Glacier

57. An Auto-Scaling group spans 3 AZs and currently has 4 running EC2 instances.

When Auto Scaling needs to terminate an EC2 instance, by default Auto Scaling will:

Choose 2 answers

  • Allow at least five minutes for Windows/Linux shutdown scripts to complete, before terminating the instance
  • Terminate the instance with the least active network connections. If multiple instances meet this criterion, one will be randomly selected.
  • Send an SNS notification, if configured to do so.
  • Terminate an instance in the AZ which currently has 2 running EC2 instances.
  • Randomly select one of the 3 AZs, and then terminate an instance in that AZ.

58. If you’re unable to connect via SSH to your EC2 instance, which of the following should you check and possibly correct to restore connectivity?

  1. Adjust Security Group to permit egress traffic over TCP port 443 from your IP.
  2. Configure the IAM role to permit changes to security group settings.
  3. Modify the instance security group to allow ingress of ICMP packets from your IP.
  4. Adjust the instance’s Security Group to permit ingress traffic over port 22 from your IP.
  5. Apply the most recently released Operating System security patches.

59. The one-time payment for Reserved Instances is __________ refundable if the reservation is cancelled.

  1. always
  2. in some circumstances
  3. never

60. A web company is looking to implement an intrusion detection and prevention system into their deployed VPC.

This platform should have the ability to scale to thousands of instances running inside of the VPC.

How should they architect their solution to achieve these goals?

  1. Configure an instance with monitoring software and the elastic network interface (ENI) set to promiscuous mode packet sniffing to see all traffic across the VPC.
  2. Create a second VPC and route all traffic from the primary application VPC through the second VPC where the scalable virtualized IDS/IPS platform resides.
  3. Configure servers running in the VPC using the host-based ‘route’ commands to send all traffic through the platform to a scalable virtualized IDS/IPS.
  4. Configure each host with an agent that collects all network traffic and sends that traffic to the IDS/IPS platform for inspection.

61. You are deploying an application to track GPS coordinates of delivery trucks in the United States.

Coordinates are transmitted from each delivery truck once every three seconds. You need to design an architecture that will enable real-time processing of these coordinates from multiple consumers. Which service should you use to implement data ingestion?

  1. Amazon Kinesis
  2. AWS Data Pipeline
  3. Amazon AppStream
  4. Amazon Simple Queue Service

62. You’re running an application on-premises due to its dependency on non-x86 hardware and want to use AWS for data backup.

Your backup application is only able to write to POSIX-compatible block-based storage. You have 140TB of data and would like to mount it as a single folder on your file server. Users must be able to access portions of this data while the backups are taking place. What backup solution would be most appropriate for this use case?

  1. Use Storage Gateway and configure it to use Gateway Cached volumes.
  2. Configure your backup software to use S3 as the target for your data backups.
  3. Configure your backup software to use Glacier as the target for your data backups.
  4. Use Storage Gateway and configure it to use Gateway Stored volumes.

63. Your company has an on-premises multi-tier PHP web application, which recently experienced downtime due to a large burst in web traffic caused by a company announcement.

Over the coming days, you are expecting similar announcements to drive similar unpredictable bursts, and are looking for ways to quickly improve your infrastructure’s ability to handle unexpected increases in traffic.

The application currently consists of 2 tiers: a web tier, which consists of a load balancer and several Linux Apache web servers, and a database tier, which hosts a Linux server running a MySQL database.

Which scenario below will provide full site functionality, while helping to improve the ability of your application in the short timeframe required?

  1. Offload traffic from the on-premises environment: set up a CloudFront distribution and configure CloudFront to cache objects from a custom origin. Choose to customize your object cache behavior, and select a TTL that objects should exist in cache.
  2. Migrate to AWS: use VM Import/Export to quickly convert an on-premises web server to an AMI. Create an Auto Scaling group which uses the imported AMI to scale the web tier based on incoming traffic. Create an RDS read replica and set up replication between the RDS instance and the on-premises MySQL server to migrate the database.
  3. Failover environment: create an S3 bucket and configure it for website hosting. Migrate your DNS to Route53 using zone file import, and leverage Route53 DNS failover to fail over to the S3-hosted website.
  4. Hybrid environment: create an AMI which can be used to launch web servers in EC2. Create an Auto Scaling group which uses the AMI to scale the web tier based on incoming traffic. Leverage Elastic Load Balancing to balance traffic between on-premises web servers and those hosted in AWS.

64. Your company has been storing a lot of data in Amazon Glacier and has asked for an inventory of what is in there exactly.

So you have decided that you need to download a vault inventory.

Which of the following statements is incorrect in relation to Vault Operations in Amazon Glacier?

  1. You can use Amazon Simple Notification Service (Amazon SNS) notifications to notify you when the job completes.
  2. A vault inventory refers to the list of archives in a vault.
  3. You can use Amazon Simple Queue Service (Amazon SQS) notifications to notify you when the job completes.
  4. Downloading a vault inventory is an asynchronous operation.
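
For context, kicking off the asynchronous inventory retrieval described here might look like the following sketch; the vault name and SNS topic ARN are made up:

```python
# Start an inventory-retrieval job; completion can be signalled via SNS
# (or SQS subscribed to that topic), matching options 1 and 3 above.
import boto3

glacier = boto3.client("glacier")
job = glacier.initiate_job(
    accountId="-",  # "-" means the account owning the credentials
    vaultName="company-archive",
    jobParameters={
        "Type": "inventory-retrieval",
        "SNSTopic": "arn:aws:sns:us-east-1:123456789012:inventory-done",
    },
)
print(job["jobId"])  # later: glacier.get_job_output(...) once the job completes
```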

65. Your company has HQ in Tokyo and branch offices all over the world and is using logistics software with a multi-regional deployment on AWS in Japan, Europe and USA.

The logistics software has a 3-tier architecture and currently uses MySQL 5.6 for data persistence. Each region has deployed its own database. In the HQ region you run an hourly batch process reading data from every region to compute cross-regional reports that are sent by email to all offices. This batch process must be completed as fast as possible to quickly optimize logistics. How do you build the database architecture in order to meet the requirements?

  1. For each regional deployment, use RDS MySQL with a master in the region and a read replica in the HQ region
  2. For each regional deployment, use MySQL on EC2 with a master in the region and send hourly EBS snapshots to the HQ region
  3. For each regional deployment, use RDS MySQL with a master in the region and send hourly RDS snapshots to the HQ region
  4. For each regional deployment, use MySQL on EC2 with a master in the region and use S3 to copy data files hourly to the HQ region
  5. Use Direct Connect to connect all regional MySQL deployments to the HQ region and reduce network latency for the batch process
