
Time to Fire the Sysadmin? What We're Doing About the AWS Outage


The downtime brought about by the massive failure at Amazon Web Services has now agonizingly stretched well into a second day, causing us to question practically everything we thought we knew about hosting web applications.

Lurking behind it all is the nagging anxiety that maybe we should go back to a simpler time, when men were men, and ran their own machine rooms.  Then come the flashbacks, and we remember what that was really like.  All it took was a careless backhoe operator…

The truth is that for all the pain and, yes, embarrassment, the answer is not to turn back the clock.  The answer is, as it almost always is in these situations, to use this experience to build something better.  Something that leverages the best of the new tools, with a deeper understanding of their risks.

Another way to put it: Don't fire the sysadmin while he's trying to fix the servers.  Keep cool and get the crisis resolved, then do a full post-mortem to squeeze every drop of learning you can from the experience.

In that spirit, we wanted to share our plans for preventing and recovering from future outages.  The truth is these have been in progress for a while, but you can bet they'll now be exposed to a whole new level of scrutiny and outrank all other priorities until they're complete.

Bad Case Scenario

For starters, we will of course continue to follow the recommended Amazon Web Services practice by maintaining backups that replicate our data and software across multiple Availability Zones.  Availability Zones are, according to AWS documentation, designed so they do not share common points of failure, such as generators and cooling equipment.  In addition, they are physically separate, so "even extremely uncommon disasters such as fires, tornados or flooding would only affect a single Availability Zone."

Moving data across Availability Zones is fast, and is supported by powerful AWS snapshotting capabilities, so it's possible to make very frequent backups, and to recover quickly.
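For the technically curious, one of those frequent backups boils down to a single API call.  Here's a minimal sketch using the boto3 Python library; the volume ID is a placeholder, and scheduling, retention, and error handling are left out:

import boto3

# Assumes AWS credentials are already configured; the volume ID below is hypothetical.
ec2 = boto3.client("ec2", region_name="us-east-1")

def snapshot_data_volume(volume_id="vol-0123456789abcdef0"):
    # Take a point-in-time EBS snapshot; AWS stores the snapshot durably in S3.
    response = ec2.create_snapshot(
        VolumeId=volume_id,
        Description="scheduled application data backup",
    )
    return response["SnapshotId"]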

In the Bad Case Scenario where an Availability Zone fails, we'll be able to spool up new application and database servers in a separate Availability Zone, recover files from a recent backup snapshot stored in Amazon's highly stable S3 infrastructure, connect to an already-running live database backup, and be back online in less than half an hour.  We'll use the AWS Elastic IP feature to eliminate the DNS propagation issues that can sometimes delay restoration of access for some users.
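To make those recovery steps concrete, here's a rough sketch of the same sequence in Python with boto3.  The snapshot, AMI, and Elastic IP identifiers are hypothetical, the database cutover is omitted, and this is an illustration of the shape of the procedure rather than our actual runbook:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def fail_over_to_healthy_az(snapshot_id, ami_id, allocation_id, az="us-east-1b"):
    # Recreate the data volume from the most recent snapshot in a healthy AZ.
    volume_id = ec2.create_volume(
        SnapshotId=snapshot_id, AvailabilityZone=az
    )["VolumeId"]
    ec2.get_waiter("volume_available").wait(VolumeIds=[volume_id])

    # Launch a pre-configured application server image in the same AZ.
    instance = ec2.run_instances(
        ImageId=ami_id,
        MinCount=1,
        MaxCount=1,
        InstanceType="m3.large",  # instance type is illustrative
        Placement={"AvailabilityZone": az},
    )["Instances"][0]
    instance_id = instance["InstanceId"]

    # Wait for the new server to come up, then attach the restored data volume.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    ec2.attach_volume(VolumeId=volume_id, InstanceId=instance_id, Device="/dev/sdf")

    # Re-point the existing Elastic IP at the new server, so no DNS change is needed.
    ec2.associate_address(InstanceId=instance_id, AllocationId=allocation_id)
    return instance_id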

But as we've seen, redundancy across Zones is not always enough.  While AWS maintains this is extraordinarily unlikely, the recent outage took out multiple Zones, which brings us to the next scenario.

Worse Case Scenario (like the current one)

In addition to maintaining redundancy across Availability Zones, we will also do so across Regions.  AWS maintains five Regions (each containing multiple Availability Zones) around the world.  There's one on the East Coast and another on the West Coast of the US.  To protect against the failure of an entire Region, we will maintain a live database backup on the opposite coast, and a complete file backup updated at least nightly (unfortunately, AWS snapshots cannot currently be made across Regions).
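Since snapshots can't cross Regions, the nightly file backup is essentially a copy into a bucket that lives on the other coast.  A minimal sketch, assuming the nightly archive has already been written to disk; the bucket and file names are made up:

import boto3

# An S3 client pointed at the West Coast Region; the bucket is assumed to exist there.
s3_west = boto3.client("s3", region_name="us-west-1")

def ship_nightly_backup(archive_path="/backups/nightly-files.tar.gz"):
    # Upload the nightly archive into the standby Region's bucket.
    s3_west.upload_file(archive_path, "example-backups-west", "nightly/nightly-files.tar.gz")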

If all the Availability Zones on one coast go down, we'll start up pre-configured application and database servers on the other one, connect to the live database backup, and restore from the nightly file backup.  It will take somewhat longer, since snapshots and the Elastic IP feature will not be available.  Also, content added since the last backup will not be available until the other Region is restored.  (Even in the current case, it does not appear that any data has been permanently lost.)
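Because an Elastic IP can't follow us to another Region, the last step in this scenario is a DNS change rather than an IP re-association.  Here's a sketch assuming the domain were hosted in Amazon's Route 53 DNS service; the zone ID, hostname, and address are placeholders:

import boto3

route53 = boto3.client("route53")

def point_site_at_standby_region(zone_id, standby_ip):
    # Re-point the site's A record at the standby Region.  Unlike an Elastic IP
    # re-association, this change has to work its way through DNS caches,
    # which is part of why Region-level recovery takes longer.
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com.",
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": standby_ip}],
                },
            }]
        },
    )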

Even so, we should be able to get back online within an hour.

Worst Case Scenario

So what if something happens to AWS as a whole, or at least they somehow lose both coasts?  For that case, we'll maintain what's called a company-diverse backup plan.  That is, we'll maintain server infrastructure with another provider completely separate from Amazon, and keep nightly database dumps and file backups on servers there.  If the AWS data is truly gone from both coasts with no warning, up to a day's data could be lost, and there would be some delay in restoring service, since we'd be dealing with real rather than virtual hardware, and with real IP address changes.  Even so, it should still be possible to be back online within a day.
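The nightly off-Amazon backup itself is nothing exotic: dump the database, bundle the files, and push both to the other provider.  A rough sketch in Python, assuming a PostgreSQL database and an rsync-reachable standby host (all names here are hypothetical):

import subprocess

def nightly_offsite_backup():
    dump_path = "/backups/site-nightly.sql.gz"

    # Dump and compress the database.
    subprocess.run("pg_dump sitedb | gzip > " + dump_path, shell=True, check=True)

    # Push the dump and the content files to the non-AWS standby host.
    subprocess.run(
        ["rsync", "-az", dump_path, "/srv/content/",
         "backup@standby.example.net:/backups/"],
        check=True,
    )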

Then we just have to worry about the guy with the backhoe.


