
Time to Fire the Sysadmin? What We're Doing About the AWS Outage


The downtime brought about by the massive failure at Amazon Web Services has now agonizingly stretched well into a second day, causing us to question practically everything we thought we knew about hosting web applications.

Lurking behind it all is the nagging anxiety that maybe we should go back to a simpler time, when men were men, and ran their own machine rooms.  Then come the flashbacks, and we remember what that was really like.  All it took was a careless backhoe operator…

The truth is that for all the pain and, yes, embarrassment, the answer is not to turn back the clock.  The answer is, as it almost always is in these situations, to use this experience to build something better.  Something that leverages the best of the new tools, with a deeper understanding of their risks.

Another way to put it: Don't fire the sysadmin while he's trying to fix the servers.  Keep cool and get the crisis resolved, then do a full post-mortem to squeeze every drop of learning you can from the experience.

In that spirit, we wanted to share our plans for preventing and recovering from future outages.  The truth is these have been in progress for a while, but you can bet they'll now be exposed to a whole new level of scrutiny and outrank all other priorities until they're complete.

Bad Case Scenario

For starters, we will of course continue to follow the recommended Amazon Web Services practice by maintaining backups that replicate our data and software across multiple Availability Zones.  Availability Zones are, according to AWS documentation, designed so they do not share common points of failure, such as generators and cooling equipment.  In addition, they are physically separate, so "even extremely uncommon disasters such as fires, tornados or flooding would only affect a single Availability Zone."

Moving data across Availability Zones is fast and is supported by powerful AWS snapshotting capabilities, so it's possible to make very frequent backups, and to recover quickly.
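To make that concrete, here's a rough sketch of the kind of scheduled snapshot job we're describing, written against the boto3 SDK.  The volume ID and description are placeholders for illustration, not our actual configuration.

    import boto3

    # Illustrative example: snapshot an EBS volume so it can be restored
    # in another Availability Zone.  The volume ID below is a placeholder.
    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",          # placeholder volume ID
        Description="Frequent backup for cross-AZ recovery",
    )
    snapshot_id = response["SnapshotId"]

    # Wait until the snapshot is complete before relying on it for recovery.
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot_id])
    print("Snapshot ready:", snapshot_id)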

In the Bad Case Scenario where an Availability Zone fails, we'll be able to spool up new application and database servers in a separate Availability Zone, recover files from a recent backup snapshot stored in Amazon's highly stable S3 infrastructure, connect to an already-running live database backup, and be back online in less than half an hour.  We'll use the AWS Elastic IP feature to eliminate the DNS propagation issues that can sometimes delay restoration of access for some users.
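For the curious, re-pointing an Elastic IP at a replacement server is essentially a single API call, which is why it sidesteps DNS propagation entirely.  A simplified boto3 sketch, with placeholder IDs standing in for the real ones:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Placeholder IDs for illustration only.
    ALLOCATION_ID = "eipalloc-0123456789abcdef0"   # the Elastic IP we already own
    REPLACEMENT_INSTANCE = "i-0123456789abcdef0"   # server started in the healthy AZ

    # Re-associate the same public IP with the replacement instance.
    # Clients keep using the address they already have, so no DNS change
    # (and no propagation delay) is involved.
    ec2.associate_address(
        AllocationId=ALLOCATION_ID,
        InstanceId=REPLACEMENT_INSTANCE,
        AllowReassociation=True,
    )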

But as we've seen, redundancy across Zones is not always enough.  While AWS maintains that multi-Zone failures are extraordinarily unlikely, the recent outage took out multiple Zones at once, which brings us to the next scenario.

Worse Case Scenario (like the current one)

In addition to maintaining redundancy across Availability Zones, we will also do so across Regions.  AWS maintains five Regions (each containing multiple Availability Zones) around the world.  There's one on the East Coast and another on the West Coast of the US.  To protect against the failure of an entire Region, we will maintain a live database backup on the opposite coast, and a complete file backup updated at least nightly (unfortunately, AWS snapshots cannot currently be made across Regions).
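The nightly cross-Region copy is straightforward.  Here's a minimal boto3 sketch that pushes a backup archive to an S3 bucket living in the opposite-coast Region; the bucket and file names are invented for the example:

    import boto3

    # Illustrative nightly job: copy a backup archive to an S3 bucket in the
    # opposite-coast Region, so it survives a Region-wide outage.
    s3_west = boto3.client("s3", region_name="us-west-1")

    s3_west.upload_file(
        Filename="/backups/files-nightly.tar.gz",    # placeholder archive
        Bucket="example-dr-backups-west",            # placeholder bucket
        Key="nightly/files-nightly.tar.gz",
    )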

If all the Availability Zones on one coast go down, we'll start up pre-configured application and database servers on the other coast, connect to the live database backup, and restore from the nightly file backup.  It will take somewhat longer, since snapshots and the Elastic IP feature will not be available across Regions.  Also, content added since the last backup will not be available until the original Region is restored.  (Even in the current case, it does not appear that any data has been permanently lost.)

Even so, we should be able to get back online within an hour.
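Bringing up one of those pre-configured servers on the other coast is conceptually a single API call per machine.  A simplified boto3 sketch, with a placeholder AMI ID standing in for a pre-built server image:

    import boto3

    # Illustration: start a pre-configured application server in the
    # West Coast Region.  The AMI ID is a placeholder.
    ec2_west = boto3.client("ec2", region_name="us-west-1")

    result = ec2_west.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder pre-configured image
        InstanceType="m1.large",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = result["Instances"][0]["InstanceId"]

    # Wait for the instance to boot before pointing traffic at it.
    ec2_west.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    print("Failover server running:", instance_id)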

Worst Case Scenario

So what if something happens to AWS as a whole, or at least they somehow lose both coasts?  For that case, we'll maintain what's called a company-diverse backup plan.  That is, we'll maintain server infrastructure with another provider completely separate from Amazon, and keep nightly database dumps and file backups on servers there.  If the AWS data is truly gone from both coasts with no warning, there's the potential that up to a day's data could be lost, and there would be some delays in restoring service, since we'd be dealing with real rather than virtual hardware, and with real IP address changes.  But it should still be possible to be back online within a day.
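The nightly job behind that company-diverse copy is deliberately low-tech.  As a rough sketch, assuming a PostgreSQL database and an ordinary rsync-accessible server at the other provider (both illustrative assumptions, not a description of our actual stack):

    import subprocess
    from datetime import date

    # Illustrative nightly job: dump the database and ship it, along with
    # the file backup, to a server at a completely separate hosting provider.
    stamp = date.today().isoformat()
    dump_path = f"/backups/db-{stamp}.dump"

    # Assumes a PostgreSQL database named "appdb" (placeholder name).
    subprocess.run(["pg_dump", "-Fc", "-f", dump_path, "appdb"], check=True)

    # Push the dump and the file archive to the non-AWS provider over rsync/SSH.
    # Hostname and paths are placeholders.
    subprocess.run(
        ["rsync", "-az", dump_path, "/backups/files.tar.gz",
         "backup@offsite.example.net:/srv/dr-backups/"],
        check=True,
    )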

Then we just have to worry about the guy with the backhoe.


