When Commercial Progression launched the new Ad Sales website for National Geographic Networks, they loved it so much that they wanted everyone to come and see it. To help encourage them, they sent out an e-blast to thousands of ad agencies around the world that offered a chance to win a free iPad to the first 100 visitors. The resulting traffic spike brought our server to its knees.
Up until this point, the Ad Sales division had been hosting their websites on a popular shared hosting provider. Like many other hosts, this company crammed as many accounts as they could onto a server, often resulting in slow load times and database connection problems. As part of the relaunch, we deployed the new site on one of our servers. Because hosting isn't our primary business, we are more concerned with our clients having a great experience than with how much money we can make from a single server. To keep our clients happy, we keep everything properly tuned and keep the number of accounts per server low. Despite all this, the server really wasn't meant to handle thousands of people all trying to access a site at the same time.
In order to salvage the project and the relationship with National Geographic Networks, we had to quickly come up with a solution that could handle massive traffic spikes. We evaluated a number of different options, but in the end the one that won out was Amazon Web Services. Some parts of the solution have evolved and will continue to evolve as technology changes and we find better ways of doing things, but the core parts have remained relatively constant.
Web Server(s): Elastic Compute Cloud (EC2)
EC2 instances are similar to a VPS that you might find elsewhere. They come in a variety of capacities and are basically a blank slate to do whatever you need with. One key difference is that they do not include permanent storage. Unless you create a volume using Amazon's Elastic Block Store (EBS) service and mount that, anything stored on the instance will be lost when it is terminated. When we originally set up the cloud hosting solution for Nat Geo, we were using instances with EBS volumes. Since then, due to reliability issues (we had an EBS volume become corrupted after a major AWS outage), we have moved to a system that allows us to set everything up in the instance storage at boot time (more on that later).
The EC2 instance (or instances) run PHP and a webserver and are what actually runs the website. Since you can't run a Drupal site without any code, we pull that in from our version control repo at startup. A little bit of scripting allows us to store things that would not ordinarily be versioned. As an example, a copy of settings.php configured for production use is stored under a different name in version control, and at startup it becomes settings.php on the production instances. One key point is that aside from unimportant things like temporary files, nothing should be written to the instance once it is operational.
Database: Relational Database Service (RDS)
RDS is Amazon's fully managed database service. Instances come with sensible security settings and default configurations based on the instance size, but much of that can easily be changed later if needed. Some of the key features include the ability to dynamically increase capacity, automatic backups, and multi-zone failover (if needed). In short, you don't have to worry about managing a separate database server. For Nat Geo, we used a small MySQL RDS instance. By having the database server separate from the webserver, we are not limited to a single server. We can run as many webserver instances as needed to handle traffic.
Connecting to an RDS instance from Drupal (or anything else) is as easy as using any other database server. When you create the instance, you set up a username, password, and database name. Once it has been provisioned, Amazon will provide you with an "endpoint" for the instance. Just use that endpoint as the database server hostname along with the credentials that you created and you're good to go.
File Storage: Simple Storage Service (S3)
Since there can be multiple webservers, handling file uploads can become interesting. Obviously, storing them on the server they were uploaded to is out of the question, since they will only be accessible if someone connects to that server, and they will be lost when the instance terminates if you are just using the instance storage. Thankfully, Amazon has a service that is meant specifically for storing files: S3. If we put all of the files in one central place, that solves one problem. It also creates a couple more: how do we get the uploaded files to S3, and how do we make Drupal serve them from there?
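Pointing Drupal at the RDS endpoint might look like this in settings.php, assuming a Drupal 7-style `$databases` array; the endpoint hostname, database name, and credentials below are placeholder examples, not values from the article.

```php
<?php
// sites/default/settings.php -- connect Drupal to the RDS instance.
// All values are illustrative; use the endpoint Amazon reports for
// your instance and the credentials you chose when creating it.
$databases['default']['default'] = array(
  'driver'   => 'mysql',
  'database' => 'drupal',
  'username' => 'drupal_user',
  'password' => 'example-password',
  'host'     => 'mydb.abc123xyz789.us-east-1.rds.amazonaws.com',
  'port'     => '3306',
  'prefix'   => '',
);
```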
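The boot-time setup described in this article could be sketched roughly as follows. The directory layout, the versioned name of the production settings file, and the repository details are all hypothetical (the article doesn't give them); the real script would clone the site's repo instead of creating placeholder files.

```shell
#!/bin/sh
# Sketch of a boot-time setup script (hypothetical names throughout).
set -e

DOCROOT=$(mktemp -d)   # stand-in for the real web root, e.g. /var/www/html

# In production this step would pull the site code from version control:
#   git clone "$REPO_URL" "$DOCROOT"
# Here we just create the layout so the sketch is runnable.
mkdir -p "$DOCROOT/sites/default"
printf '<?php // production settings\n' > "$DOCROOT/sites/default/settings.prod.php"

# Promote the versioned production copy to the filename Drupal expects.
cp "$DOCROOT/sites/default/settings.prod.php" \
   "$DOCROOT/sites/default/settings.php"

echo "settings.php installed in $DOCROOT/sites/default"
```

Because nothing is written to the instance after this point, the same script can bring up any number of identical webservers on plain instance storage.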