
How SmugMug survived the Amazonpocalypse

April 24, 2011

tl;dr: Amazon had a major outage last week, which took down some popular websites. Despite using a lot of Amazon services, SmugMug didn’t go down because we spread across availability zones and designed for failure to begin with, among other things.

We’ve known for quite some time that SkyNet was going to achieve sentience and attack us on April 21st, 2011. What we didn’t know is that Amazon’s Web Services platform (AWS) was going to be their first target, and that the attack would render many popular websites inoperable while Amazon battled the Terminators.

Sorry about that, that was probably our fault for deploying SkyNet there in the first place.

We’ve been getting a lot of questions about how we survived (SmugMug was minimally impacted, and all major services remained online during the AWS outage) and what we think of the whole situation. So here goes.

http://jossphoto.smugmug.com/People/People-Digital-Art/2706381_EUKLw#209083325_MaNtP

HOW WE DID IT

We’re heavy AWS users with many petabytes of storage in their Simple Storage Service (S3) and lots of Elastic Compute Cloud (EC2) instances, load balancers, etc. If you’ve ever visited a SmugMug page or seen a photo or video embedded somewhere on the web (and you probably have), you’ve interacted with our AWS-powered services. Without AWS, we wouldn’t be where we are today – outages or not. We’re still very excited about AWS even after last week’s meltdown.

I wish I could say we had some sort of magic bullet that helped us stay alive. I’d certainly share it if I had one. In reality, our stability during this outage stemmed from four simple things:

First, all of our services in AWS are spread across multiple Availability Zones (AZs). We’d use 4 if we could, but one of our AZs is capacity constrained, so we’re mostly spread across three. (I say “one of our” because your “us-east-1b” is likely different from my “us-east-1b” – every customer is assigned to different AZs and the names don’t match up). When one AZ has a hiccup, we simply use the other AZs. Often this is graceful, but there can be hiccups – there are certainly tradeoffs.
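To make that concrete, here’s a minimal sketch (using the boto library, with a placeholder AMI and fleet size – not our actual deployment code) of what spreading a fleet across every AZ your account can use might look like:

```python
# Minimal sketch (boto 2.x): round-robin a fleet of instances across every
# Availability Zone the account can launch into. AMI ID, instance type, and
# fleet size are placeholders.
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# Zones this account can actually use (your 'us-east-1b' isn't my 'us-east-1b').
zones = [z.name for z in conn.get_all_zones() if z.state == 'available']

AMI_ID = 'ami-12345678'   # placeholder
FLEET_SIZE = 9

for i in range(FLEET_SIZE):
    conn.run_instances(
        AMI_ID,
        instance_type='m1.large',
        placement=zones[i % len(zones)],   # pin each instance to one AZ
    )
```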

Second, we designed for failure from day one. Any of our instances, or any group of instances in an AZ, can be “shot in the head” and our system will recover (with some caveats – but they’re known, understood, and tested). I wish we could say this about some of our services in our own datacenter, but we’ve learned from our earlier mistakes and made sure that every piece we’ve deployed to AWS is designed to fail and recover.
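We won’t detail our own recovery machinery here, but one common way to get that “shoot it in the head and it comes back” property is an Auto Scaling group spread across several AZs, so terminated instances are automatically replaced. A rough sketch (boto, with placeholder names and AMI – one possible approach, not a description of our setup):

```python
# Rough sketch (boto 2.x): an Auto Scaling group spread across three AZs.
# If any instance is terminated ("shot in the head"), Auto Scaling launches a
# replacement to keep the group at min_size. Names and AMI are placeholders.
import boto.ec2.autoscale
from boto.ec2.autoscale import LaunchConfiguration, AutoScalingGroup

conn = boto.ec2.autoscale.connect_to_region('us-east-1')

lc = LaunchConfiguration(
    name='render-lc',
    image_id='ami-12345678',      # placeholder AMI
    instance_type='m1.large',
)
conn.create_launch_configuration(lc)

group = AutoScalingGroup(
    group_name='render-asg',
    availability_zones=['us-east-1a', 'us-east-1c', 'us-east-1d'],
    launch_config=lc,
    min_size=6,                   # replacements are launched automatically
    max_size=12,
)
conn.create_auto_scaling_group(group)
```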

Third, we don’t use Elastic Block Storage (EBS), which is the main component that failed last week. We’ve never felt comfortable with the unpredictable performance and sketchy durability that EBS provides, so we’ve never taken the plunge. Everyone (well, except for a few notable exceptions) knows that you need to use some level of RAID across EBS volumes if you want a reasonable level of durability (just like you would with any other storage device, like a hard disk), but even so, EBS just hasn’t seemed like a good fit for us. That also rules out their Relational Database Service (RDS) for us – since I believe RDS is, under the hood, EC2 instances running MySQL on EBS. I’ll be the first to admit that EBS’ lack of predictable performance has been our primary reason for staying away, rather than durability, but durability and availability have been a strong secondary consideration. It’s hard to advocate a “systems are disposable” strategy when they have such a vital dependency on another service. Clearly, at least to us, it’s not a perfect product for our use case.
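For anyone who does go the EBS route, the “RAID across EBS volumes” idea looks roughly like this sketch (boto, with a placeholder instance ID, zone, and sizes): provision several volumes in the instance’s AZ, attach them, and then build the array (mdadm RAID10, say) inside the guest OS.

```python
# Sketch (boto 2.x): provision four EBS volumes in the instance's AZ and
# attach them, so the guest OS can build a RAID10 array (e.g. with mdadm)
# across them. Instance ID, zone, sizes, and devices are placeholders.
import time
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

INSTANCE_ID = 'i-12345678'        # placeholder; must live in ZONE
ZONE = 'us-east-1a'
DEVICES = ['/dev/sdf', '/dev/sdg', '/dev/sdh', '/dev/sdi']

for device in DEVICES:
    vol = conn.create_volume(100, ZONE)            # 100 GiB per volume
    while vol.status != 'available':               # wait before attaching
        time.sleep(5)
        vol.update()
    conn.attach_volume(vol.id, INSTANCE_ID, device)
```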

Which brings us to fourth, we aren’t 100% cloud yet. We’re working as quickly as possible to get there, but the lack of a performant, predictable cloud database at our scale has kept us from going there 100%. As a result, the exact types of data that would have potentially been disabled by the EBS meltdown don’t actually live at AWS at all – it all still lives in our own datacenters, where we can provide predictable performance. This has its own downsides – we had two major outages ourselves this week (we lost a core router and its redundancy earlier, and a core master database server later). I wish I didn’t have to deal with routers or database hardware failures anymore, which is why we’re still marching towards the cloud.

Water On Fire © 2010 Colleen M. Griffith, www.colleenmgriffith.com. All rights reserved. Lava from the Kilauea volcano flowing into the ocean on the Big Island of Hawaii, August 23, 2010.

WHAT HAPPENED

So what did we see when AWS blew up? Honestly, not much. One of our Elastic Load Balancers (ELBs) on a non-critical service lost its mind and stopped behaving properly, especially with regard to communication with the affected AZs. We updated our own status board, and then I tried to work around the problem. We quickly discovered we could just launch another identical ELB, point it at the non-affected zones, and update our DNS. Five minutes after we discovered this, DNS had propagated and we were back in business. It’s interesting to note that the ELB itself was affected here – not the instances behind it. I don’t know much about how ELBs operate, but this leads me to believe that ELBs are constructed, like RDS, out of EC2 instances with EBS volumes. That seems like the most logical reason why an ELB would be affected by an EBS outage – but other things like network saturation, network component failures, split-brain, etc. could easily cause it as well.
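For the curious, the workaround looks roughly like this (a boto sketch with placeholder names, zones, and instance IDs – not our actual runbook): create a replacement ELB limited to the healthy AZs, register the existing backends, and point your DNS record at the new ELB’s hostname.

```python
# Sketch (boto 2.x) of the workaround: stand up a replacement ELB that only
# spans the healthy AZs, register the existing backends, and repoint DNS at
# the new hostname. Names, zones, and instance IDs are placeholders.
import boto.ec2.elb

conn = boto.ec2.elb.connect_to_region('us-east-1')

healthy_zones = ['us-east-1a', 'us-east-1c']         # the unaffected AZs
listeners = [(80, 80, 'http'), (443, 443, 'tcp')]    # (LB port, instance port, protocol)

lb = conn.create_load_balancer('service-lb-2', healthy_zones, listeners)
lb.register_instances(['i-11111111', 'i-22222222'])  # the existing backends

# Point the service's CNAME at the new hostname (via your DNS provider) and
# wait for the short TTL to expire.
print(lb.dns_name)
```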

Probably the worst part about this whole thing is that the outage in question spread to more than one AZ. In theory, that’s not supposed to happen – I believe each AZ is totally isolated (physically in another building at the very least, if not on the other side of town), so there should be very few shared components. In practice, I’ve often wondered how AWS does capacity planning for total AZ failures. You could easily imagine people’s automated (and even non-automated) systems rapidly provisioning new capacity in another AZ if there’s a catastrophic event (like Terminators attacking your facility, say). And you could easily imagine that surge in capacity taking enough of a toll on one or more AZs to incapacitate them, even temporarily, which could cause a cascade effect. We’ll have to wait for the detailed post-mortem to see if something similar happened here, but I wouldn’t be surprised if a surge in EBS requests to a 2nd AZ had at least a deteriorating effect. Getting that capacity planning right is just another crazy-difficult problem that I’m glad I don’t have to deal with for all of our AWS-powered services.

http://sreekanth.smugmug.com/Other/DailyPhotos/8242851_xEL2F#531020046_ctKmd

ADVICE

This stuff sounds super simple, but it’s really pretty important. If I were starting anew today, I’d absolutely build 100% cloud, and here’s the approach I’d take:

  • Spread across as many AZs as you can. Use all four. Don’t be like this guy and put all of the monitoring for your poor cardiac arrest patients in one AZ (!!).
  • If your stuff is truly mission critical (banking, government, health, serious money maker, etc), spread across as many Regions as you can. This is difficult, time-consuming, and expensive – so it doesn’t make sense for most of us. But for some of us, it’s a requirement. This might not even be live – it could be just for Disaster Recovery (DR).
  • Beyond mission critical? Spread across many providers. This is getting more and more difficult as AWS continues to put distance between themselves and their competitors, grow their platform and build services and interfaces that aren’t trivial to replicate, but if your stuff is that critical, you probably have the dough. Check out Eucalyptus and Rackspace Cloud for starters.
  • I should note that since spreading across multiple Regions and providers adds crazy amounts of extra complexity, and complex systems tend to be less stable, you could be shooting yourself in the foot unless you really know what you’re doing. Often redundancy has a serious cost – keep your eyes wide open.
  • Build for failure. Each component (EC2 instance, etc) should, as much as possible, be able to die without affecting the whole system. Your product or design may make that hard or impossible to do 100% – but I promise large portions of your system can be designed that way. Ideally, each portion of your system in a single AZ should be killable without long-term (data loss, prolonged outage, etc) side effects. One thing I mentally do sometimes is pretend that all my EC2 instances have to be Spot instances – someone else has their finger on the kill switch, not me. That’ll get you to build right. 🙂
  • Understand your components and how they fail. Use any component, such as EBS, only if you fully understand it. For mission-critical data using EBS, that means RAID1/5/6/10/etc locally, and some sort of replication or mirroring across AZs, with some sort of mechanism to get eventually consistent and/or re-instantiate after failure events. There’s a lot of work being done in modern scale-out databases, like Cassandra, for just this purpose. This is an area we’re still researching and experimenting in, but SimpleGeo didn’t seem affected and they use Cassandra on EC2 (and on EBS, as far as I know), so I’d say that’s one big vote.
  • Try to componentize your system. Why take the entire thing offline if only a small portion is affected? During the EBS meltdown, a tiny portion of our site (custom on-the-fly rendered photo sizes) was affected. We didn’t have to take the whole site offline, just that one component for a short period to repair it. This is a big area of investment at SmugMug right now, and we now have a number of individual systems that are independent enough from each other to sustain partial outages but keep service online. (Incidentally, it’s AWS that makes this much easier to implement)
  • Test your components. I regularly kill off stuff on EC2 just to see what’ll happen. (A minimal sketch of this sort of random-kill testing follows this list.) I found and fixed a rare bug related to this over the weekend, actually, one that had been live and in production for quite some time. Verify your slick new eventually consistent datastore is actually eventually consistent. Ensure your amazing replicator will actually replicate correctly or allow you to rebuild in a timely fashion. Start by doing these tests during maintenance windows so you know how it works. Then, once your system seems stable enough, start surprising your Ops and Engineering teams by killing stuff in the middle of the day without warning them. They’ll love you.
  • Relax. Your stuff is gonna break, and you’re gonna have outages. If you did all of the above, your outages will be shorter, less damaging, and less frequent – but they’ll still happen. Gmail has outages, Facebook has outages, your bank’s website has outages. They all have a lot more time, money, and experience than you do and they’re offline or degraded fairly frequently, considering. Your customers will understand that things happen, especially if you can honestly tell them these are things you understand and actively spend time testing and implementing. Accidents happen, whether they’re in your car, your datacenter, or your cloud.
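Here’s the minimal random-kill sketch promised above (boto, with a hypothetical “role” tag convention and placeholder values): pick a random running instance from a given role, terminate it, and watch whether the rest of the system keeps serving while replacement capacity comes back on its own.

```python
# Minimal kill-testing sketch (boto 2.x): terminate one random instance from a
# given role and watch whether the system heals. The 'role' tag is a
# hypothetical naming convention used only for this example.
import random
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# Running instances tagged with the role under test.
reservations = conn.get_all_instances(filters={'tag:role': 'render',
                                               'instance-state-name': 'running'})
instances = [i for r in reservations for i in r.instances]
if not instances:
    raise SystemExit('nothing to kill')

victim = random.choice(instances)
print('Terminating %s in %s' % (victim.id, victim.placement))
conn.terminate_instances(instance_ids=[victim.id])

# Now watch your dashboards: the site should stay up, and replacement
# capacity should come back without a human doing anything.
```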

Best part? Most of that stuff isn’t difficult or expensive, in large part thanks to the on-demand pricing of cloud computing.

WHAT ABOUT AWS?

Amazon has some explaining to do about how this outage affected multiple AZs, no question. Even so, high volume sites like Netflix and SmugMug remained online, so there are clearly cloud strategies that worked. Many of the affected companies are probably taking good hard looks at their cloud architecture, as well they should. I know we are, even though we were minimally affected.

Still, SmugMug wouldn’t be where we are today without AWS. We had a monster outage (~8.5 hours of total downtime) with AWS a few years ago, when S3 went totally dark, but that’s been the only significant setback. Our datacenter-related outages have all been far worse, for a wide range of reasons, as many of our loyal customers can attest. 😦 That’s one of the reasons we’re working so hard to get our remaining services out of our control and into Amazon’s – they’re still better at this than almost anyone else on earth.

Will we suffer outages in the future because of Amazon? Yes. I can guarantee it. Will we have fewer outages? Will we have less catastrophic outages? That’s my bet.

http://jossphoto.smugmug.com/Landscapes/Digital-Art-Outdoors/2636501_QmaFJ#140520059_wN6kq

THE CLOUD IS DEAD!

There’s a lot of noise on the net about how cloud computing is dead, stupid, flawed, makes no sense, is coming crashing down, etc. Anyone selling that stuff is simply trying to get page views and doesn’t know what on earth they’re talking about. Cloud computing is just a tool, like any other. Some companies, like Netflix and SimpleGeo, likely understand the tool better. It’s a new tool, so cut the companies that are still learning some slack.

Then send them to my blog. 🙂

Oh, and while you’re here, would you mind doing me a huge favor? If you use StackOverflow, ServerFault, or any other StackExchange sites – I could really use your help. Thanks!

And, of course, we’re always hiring. Come see what it’s like to love your job (especially if you’re into cloud computing).

UPDATE: Joe Stump is out with the best blog post about the outage yet, The Cloud is not a Silver Bullet, imho.

  1. April 25, 2011 at 12:08 am

    I loved it all, except at Rackspace we’ve made some HUGE bets on Open Source, which will help the rest of the industry beat, or at least keep up with, Amazon, and will help folks follow your advice in a way that won’t lock them into Rackspace. See http://openstack.org/ and we have some other stuff coming soon that I’d love to have you test that will eat at some of your love for Amazon (not all of it, I know that’s an impossible task for a company your size that’s already built around a single vendor).

  2. April 25, 2011 at 12:57 am

    Kudos for planning ahead and your smart architecture. And even more kudos for sharing your expertise with others so that they can leverage your experience.

  3. April 25, 2011 at 2:09 am

    wow amazing article with great insight. Thanks for sharing the secrets with a touch of humor and great photo to go along with 🙂

  4. April 25, 2011 at 2:32 am

    Hi Don

    Very nice article. The technology brought me here (I also work on a cloud platform) and then I saw that one of my pictures is published on the blog! Thanks a lot, that is an honor!

    Sreekanth

  5. April 25, 2011 at 3:23 am

    Don, I gotta believe the reason that you and Netflix stayed up through the outage is cause you didn’t use EBS. I’ve yet to hear of an Amazon customer that used EBS in the affected AZ’s that stayed up during the outage.

    • Mike Malone
      April 25, 2011 at 8:56 pm

      I don’t understand why everyone is assuming the only reason some folks made it through this is because they don’t use EBS. I’ll be the first to admit that at SimpleGeo we were rather lucky, we weren’t affected by the outage at all. But it’s not because we don’t use EBS. In fact, we use EBS all over the place. We just expect it to fail. In fact, we expect entire AZs to fail. If EBS had died in one of our AZs we would have dropped it from production infrastructure until it was repaired. We drop AZs all the time for all sorts of reasons, so it’s a pretty normal operating procedure for us at this point.

      As far as I can tell from the outage report (and I’m anxiously awaiting a full post-mortem) EBS never fully failed in more than one availability zone. I’m reading between the lines a bit, but it looks like one AZ blew up, and the others were degraded due to capacity issues and throughput issues with the AWS management backplane. Anyone who was running with an operational replica in another AZ could hot-failover and was fine. If you use ELB that’s a snap. It’s folks who had critical infrastructure on EBS in the failed AZ that got hit hardest. Some of them probably expected to have the ability to spin up new volumes from backup quickly. In retrospect it’s pretty clear why they couldn’t, but before last week I don’t think I would have predicted that either.

      The biggest wildcard to me is the multi-AZ RDS failover issues. This problem also seems to be EBS / management backplane related, and it’s certainly not what Amazon advertised. However, multi-homed RDBMS is black magic. Multi-homed MySQL is particularly gnarly. But people told Amazon that’s what they wanted, and for better or worse Amazon listens to its customers. C’est la vie.

      There’s lots of room to share responsibility here, and I don’t want to antagonize anyone, but I’m finding a lot of FUD out there that’s not entirely fair to AWS. Just saying.

  6. April 25, 2011 at 6:24 am

    A couple of remarks/further suggestions:

    – add another layer of abstraction to the spreading across multiple, and as much as possible, AZs: spread to multiple cloud hosters. Amazon is by far the most extreme, but there are others providing an EC2/S3 compatible solution even (especially if a service is mission critical).

    – Everything behind the (redundant) load balancer at any particular data center can be setup redundantly, so that shouldn’t be an issue. I’m still thinking about a good solution for automatic failover between geographically separated load balancers. Load sharing on DNS level doesn’t prevent your clients noticing the down time, and the step from DNS lookup to first contact to load balancer seems to be the only step left that can’t have automatic failover (unless you hack your DNS server to have 1 minute TTLs and dynamically check load balancer availability and remove/add IPs accordingly, but then there’s still downtime). Wonder if http://en.wikipedia.org/wiki/Mobile_IP could be of help here, but probably the home agent would become the bottleneck then. Unless I’m overlooking something here..

    – Did you consider Riak for asset management? I’m considering it for the same purpose. According to its website, it “scales predictably”, and it is “A truly fault-tolerant system, Riak has no single point of failure. No machine is special or central in Riak”. It exposes a HTTP interface, so you could bind it directly to for example Amazon Cloudfront with custom origins.

  7. April 25, 2011 at 6:32 am

    Sreekanth Narayanan :

    Hi Don

    Very nice article. The technology brought me here (I also work on a cloud platform) and then I saw that one of my pictures is published on the blog! Thanks a lot, that is an honor!

    Sreekanth

    It’s a beautiful shot – thanks for shooting it and for being a customer! 🙂

  8. April 25, 2011 at 6:34 am

    Larry O’Brien :

    Don, I gotta believe the reason that you and Netflix stayed up through the outage is cause you didn’t use EBS. I’ve yet to hear of an Amazon customer that used EBS in the affected AZ’s that stayed up during the outage.

    I can’t speak for their architecture, but I’m 99% sure that SimpleGeo uses Cassandra on EBS across multiple AZs. They weren’t affected, as far as I could tell. I’m sure there were others, too, given how many customers Amazon has. It’d be interesting to hear from more survivors…

  9. April 25, 2011 at 7:42 am

    PBS (http://www.pbs.org/) was affected for a while primarily because we do use EBS-backed RDS databases. Despite being spread across multiple availability-zones, we weren’t easily able to launch new resources ANYWHERE in the East region since everyone else was trying to do the same. I ended up pushing the RDS stuff out West for the time being.

    I did a write-up of our own situation last week: http://bit.ly/gArqeU

  10. April 25, 2011 at 8:26 am

    Well done article. I feel a lot more confident in the status of Smugmug and the difference between the outages that happened last week. I thoroughly applaud the transparency and sharing of knowledge.

    At work I run our webservers on Rackspace and am happy with their support. I know you have some good links in there and would applaud balancing some of the load there. I just hope that they give you more data than they were able to provide me about cloud data durability. Yes, you got me thinking with your previous blog post about data durability.

  11. josh
    April 25, 2011 at 10:58 am

    um. your sites down now. What went wrong this time?

    • April 25, 2011 at 12:05 pm

      That was simply an embarrassing user error. Ugh. We’re doing some maintenance on a portion of our DB infrastructure at the moment. Fairly routine situation, and we have policies around how this happens so that once the work starts to happen, the databases in question are out of production and ready to go. The Ops guy working on this project simply forgot an important step in this process. Once we noticed (aka, once SmugMug stopped working), we fixed it within a matter of minutes. Embarrassing, especially since it was so easily avoidable, but not the end of the world.

  12. April 25, 2011 at 11:30 am

    Kudos for the excellent and detailed article. As a network engineer for a large organization – and a happy SmugMug member – I see many benefits, challenges, and risks in this new era of virtualization. Your willingness to share the experience with Amazon is highly admirable.

  13. jbm
    April 25, 2011 at 4:30 pm

    Great article! Thanks for sharing.

    Have you also encountered network failures using AWS? and how did your site keep up in that situation? e.g. what happens if the primary route between San Francisco and Seattle goes down?

  14. Tiago
    April 25, 2011 at 5:12 pm

    That’s an amazingly great post. Thanks Don!

  15. April 25, 2011 at 6:53 pm

    Hey Don – thx much for the details in the article. Very timely stuff – was wondering how much the outages were tied to what I was reading about at Amazon. And glad to hear of all the precautions you and the team have taken to minimize downtime. Quick question: I’m curious how you came across my photo that you published in your blog; I’m always curious to learn which marketing strategies are working vs. not-working. Take care, Colleen

  16. Brian Wong
    May 12, 2011 at 12:46 pm

    Excellent post as usual, Don. Thanks for sharing your wisdom and experience with the rest of us!
