Archive

Archive for the ‘cloud computing’ Category

Best AWS outage post yet

April 26, 2011 1 comment

From The Cloud is not a Silver Bullet by Joe Stump at SimpleGeo:

…what is so shocking about this banter is that startups around the globe were essentially blaming a hard drive manufacturer for taking down their sites. I don’t believe I’ve ever heard of a startup blaming NetApp or Seagate for an outage in their hosted environments. People building on the cloud shouldn’t get a pass for poor architectural decisions that put too much emphasis on, essentially, network attached RAID1 storage saving their asses in an outage.

Go read the rest, it’s great. Better than mine.

How SmugMug survived the Amazonpocalypse

April 24, 2011 50 comments

tl;dr: Amazon had a major outage last week, which took down some popular websites. Despite using a lot of Amazon services, SmugMug didn’t go down because we spread across availability zones and designed for failure to begin with, among other things.

We’ve known for quite some time that SkyNet was going to achieve sentience and attack us on April 21st, 2011. What we didn’t know is that Amazon’s Web Services platform (AWS) was going to be their first target, and that the attack would render many popular websites inoperable while Amazon battled the Terminators.

Sorry about that, that was probably our fault for deploying SkyNet there in the first place.

We’ve been getting a lot of questions about how we survived (SmugMug was minimally impacted, and all major services remained online during the AWS outage) and what we think of the whole situation. So here goes.

HOW WE DID IT

We’re heavy AWS users with many petabytes of storage in their Simple Storage Service (S3) and lots of Elastic Compute Cloud (EC2) instances, load balancers, etc. If you’ve ever visited a SmugMug page or seen a photo or video embedded somewhere on the web (and you probably have), you’ve interacted with our AWS-powered services. Without AWS, we wouldn’t be where we are today – outages or not. We’re still very excited about AWS even after last week’s meltdown.

I wish I could say we had some sort of magic bullet that helped us stay alive. I’d certainly share it if I had one. In reality, our stability during this outage stemmed from four simple things:

First, all of our services in AWS are spread across multiple Availability Zones (AZs). We’d use 4 if we could, but one of our AZs is capacity constrained, so we’re mostly spread across three. (I say “one of our” because your “us-east-1b” is likely different from my “us-east-1b” – every customer is assigned to different AZs and the names don’t match up). When one AZ has a hiccup, we simply use the other AZs. Often this is graceful, but not always – there are certainly tradeoffs.
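
Roughly, the shape of “spread across AZs” looks like this. This is a minimal sketch with the classic boto library – not our actual deployment code – and the AMI ID, instance type, and counts are placeholders:

```python
# Minimal sketch: round-robin identical instances across the Availability Zones
# available to this account. Not SmugMug's tooling; AMI ID and counts are fake.
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# Discover the AZs this account can actually use.
zones = [z.name for z in conn.get_all_zones() if z.state == 'available']

AMI_ID = 'ami-00000000'   # placeholder
COUNT = 6                 # total instances wanted for this service

for i in range(COUNT):
    zone = zones[i % len(zones)]
    conn.run_instances(AMI_ID,
                       instance_type='m1.large',
                       placement=zone)   # pin each instance to a specific AZ
```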

Second, we designed for failure from day one. Any of our instances, or any group of instances in an AZ, can be “shot in the head” and our system will recover (with some caveats – but they’re known, understood, and tested). I wish we could say this about some of our services in our own datacenter, but we’ve learned from our earlier mistakes and made sure that every piece we’ve deployed to AWS is designed to fail and recover.

Third, we don’t use Elastic Block Storage (EBS), which is the main component that failed last week. We’ve never felt comfortable with the unpredictable performance and sketchy durability that EBS provides, so we’ve never taken the plunge. Everyone (well, except for a few notable exceptions) knows that you need to use some level of RAID across EBS volumes if you want some reasonable level of durability (just like you would with any other storage device like a hard disk), but even so, EBS just hasn’t seemed like a good fit for us. Which also rules out their Relational Database Service (RDS) for us – since I believe RDS is, under the hood, EC2 instances running MySQL on EBS. I’ll be the first to admit that EBS’ lack of predictable performance has been our primary reason for staying away, rather than durability, but durability & availability have been a strong secondary consideration. It’s hard to advocate a “systems are disposable” strategy when they have such a vital dependency on another service. Clearly, at least to us, it’s not a perfect product for our use case.
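
For the curious, the usual “RAID across EBS” pattern is to attach several volumes to one instance and mirror or stripe across them on the host. A rough sketch with the classic boto library (we don’t run this ourselves; volume sizes, device names, and the instance ID are placeholders):

```python
# Hedged sketch of attaching multiple EBS volumes to one instance so they can
# be assembled into a software RAID set on the host. All IDs are placeholders.
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

INSTANCE_ID = 'i-00000000'   # placeholder
ZONE = 'us-east-1b'          # must match the instance's AZ
DEVICES = ['/dev/sdf', '/dev/sdg', '/dev/sdh', '/dev/sdi']

volumes = []
for device in DEVICES:
    vol = conn.create_volume(100, ZONE)            # 100 GB per volume
    conn.attach_volume(vol.id, INSTANCE_ID, device)
    volumes.append(vol)

# On the instance itself you'd then build a software RAID set over the attached
# devices (e.g. mdadm RAID10) and put your filesystem on top -- which still
# leaves you depending on EBS behaving well underneath.
```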

Which brings us to fourth, we aren’t 100% cloud yet. We’re working as quickly as possible to get there, but the lack of a performant, predictable cloud database at our scale has kept us from going there 100%. As a result, the exact types of data that would have potentially been disabled by the EBS meltdown don’t actually live at AWS at all – it all still lives in our own datacenters, where we can provide predictable performance. This has its own downsides – we had two major outages ourselves this week (we lost a core router and its redundancy earlier, and a core master database server later). I wish I didn’t have to deal with routers or database hardware failures anymore, which is why we’re still marching towards the cloud.

Photo: “Water On Fire” © 2010 Colleen M. Griffith (www.colleenmgriffith.com) – lava from the Kilauea volcano flowing into the ocean on the Big Island of Hawaii, August 23, 2010.

WHAT HAPPENED

So what did we see when AWS blew up? Honestly, not much. One of our Elastic Load Balancers (ELBs) on a non-critical service lost its mind and stopped behaving properly, especially with regards to communication with the affected AZs. We updated our own status board, and then I tried to work around the problem. We quickly discovered we could just launch another identical ELB, point it at the non-affected zones, and update our DNS. 5 minutes after we discovered this, DNS had propagated, and we were back in business. It’s interesting to note that the ELB itself was affected here – not the instances behind it. I don’t know much about how ELBs operate, but this leads me to believe that ELBs are constructed, like RDS, out of EC2 instances with EBS volumes. That seems like the most logical reason why an ELB would be affected by an EBS outage – but other things like network saturation, network component failures, split-brain, etc could easily cause it as well.
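
The workaround itself was about as simple as it sounds: stand up a fresh ELB pointed only at the healthy AZs, then move DNS to it. A rough sketch with the classic boto library – names, zones, and instance IDs are placeholders, and the DNS change is left out because it depends on your DNS provider:

```python
# Sketch of replacing a misbehaving ELB with a new one in unaffected AZs.
import boto.ec2.elb

elb_conn = boto.ec2.elb.connect_to_region('us-east-1')

HEALTHY_ZONES = ['us-east-1b', 'us-east-1c']   # the AZs that were unaffected
LISTENERS = [(80, 80, 'http')]                 # LB port, instance port, protocol

# Create the replacement load balancer in the healthy zones only.
new_elb = elb_conn.create_load_balancer('service-elb-failover',
                                        HEALTHY_ZONES,
                                        LISTENERS)

# Re-register the same backend instances with the new ELB.
new_elb.register_instances(['i-00000001', 'i-00000002'])

print(new_elb.dns_name)   # point your DNS record (a low TTL helps) at this name
```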

Probably the worst part about this whole thing is that the outage in question spread to more than one AZ. In theory, that’s not supposed to happen – I believe each AZ is totally isolated (physically in another building at the very least, if not on the other side of town), so there should be very few shared components. In practice, I’ve often wondered how AWS does capacity planning for total AZ failures. You could easily imagine people’s automated (and even non-automated) systems simply rapidly provisioning new capacity in another AZ if there’s a catastrophic event (like Terminators attacking your facility, say). And you could easily imagine that surge in capacity taking enough toll on one or more AZs to incapacitate them, even temporarily, which could cause a cascade effect. We’ll have to wait for the detailed post-mortem to see if something similar happened here, but I wouldn’t be surprised if a surge in EBS requests to a 2nd AZ had at least a detrimental effect. Getting that capacity planning done just right is just another crazy difficult problem that I’m glad I don’t have to deal with for all of our AWS-powered services.

Photo: sunset, 30 May 2009 – featured in this post.

ADVICE

This stuff sounds super simple, but it’s really pretty important. If I were starting anew today, I’d absolutely build 100% cloud, and here’s the approach I’d take:

  • Spread across as many AZs as you can. Use all four. Don’t be like this guy and put all of the monitoring for your poor cardiac arrest patients in one AZ (!!).
  • If your stuff is truly mission critical (banking, government, health, serious money maker, etc), spread across as many Regions as you can. This is difficult, time consuming, and expensive – so it doesn’t make sense for most of us. But for some of us, it’s a requirement. This might not even be live – just for Disaster Recovery (DR).
  • Beyond mission critical? Spread across many providers. This is getting more and more difficult as AWS continues to put distance between themselves and their competitors, grow their platform and build services and interfaces that aren’t trivial to replicate, but if your stuff is that critical, you probably have the dough. Check out Eucalyptus and Rackspace Cloud for starters.
  • I should note that since spreading across multiple Regions and providers adds crazy amounts of extra complexity, and complex systems tend to be less stable, you could be shooting yourself in the foot unless you really know what you’re doing. Often redundancy has a serious cost – keep your eyes wide open.
  • Build for failure. Each component (EC2 instance, etc) should be able to die without affecting the whole system as much as possible. Your product or design may make that hard or impossible to do 100% – but I promise large portions of your system can be designed that way. Ideally, each portion of your system in a single AZ should be killable without long-term (data loss, prolonged outage, etc) side effects. One thing I mentally do sometimes is pretend that all my EC2 instances have to be Spot instances – someone else has their finger on the kill switch, not me. That’ll get you to build right. 🙂
  • Understand your components and how they fail. Use any component, such as EBS, only if you fully understand it. For mission-critical data using EBS, that means RAID1/5/6/10/etc locally, and some sort of replication or mirroring across AZs, with some sort of mechanism to get eventually consistent and/or re-instantiate after failure events. There’s a lot of work being done in modern scale-out databases, like Cassandra, for just this purpose. This is an area we’re still researching and experimenting in, but SimpleGeo didn’t seem affected and they use Cassandra on EC2 (and on EBS, as far as I know), so I’d say that’s one big vote.
  • Try to componentize your system. Why take the entire thing offline if only a small portion is affected? During the EBS meltdown, a tiny portion of our site (custom on-the-fly rendered photo sizes) was affected. We didn’t have to take the whole site offline, just that one component for a short period to repair it. This is a big area of investment at SmugMug right now, and we now have a number of individual systems that are independent enough from each other to sustain partial outages but keep service online. (Incidentally, it’s AWS that makes this much easier to implement)
  • Test your components. I regularly kill off stuff on EC2 just to see what’ll happen. I found and fixed a rare bug related to this over the weekend, actually, that’d been live and in production for quite some time. Verify your slick new eventually consistent datastore is actually eventually consistent. Ensure your amazing replicator will actually replicate correctly or allow you to rebuild in a timely fashion. Start by doing these tests during maintenance windows so you know how it works (there’s a rough sketch of this kind of kill test just after this list). Then, once your system seems stable enough, start surprising your Ops and Engineering teams by killing stuff in the middle of the day without warning them. They’ll love you.
  • Relax. Your stuff is gonna break, and you’re gonna have outages. If you did all of the above, your outages will be shorter, less damaging, and less frequent – but they’ll still happen. Gmail has outages, Facebook has outages, your bank’s website has outages. They all have a lot more time, money, and experience than you do and they’re offline or degraded fairly frequently, considering. Your customers will understand that things happen, especially if you can honestly tell them these are things you understand and actively spend time testing and implementing. Accidents happen, whether they’re in your car, your datacenter, or your cloud.
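
As promised in the “test your components” item above, here’s the flavor of that kind of kill test. This is a toy sketch with the classic boto library, not our actual tooling; the “chaos-ok” tag is an invented convention for marking instances that are safe to terminate, and you should only aim this at systems designed to survive instance loss:

```python
# Toy "kill something and see what happens" script. Run it in a maintenance
# window first. The tag filter is a placeholder convention, not a real standard.
import random
import boto.ec2

conn = boto.ec2.connect_to_region('us-east-1')

# Collect running instances that are explicitly marked as safe to kill.
reservations = conn.get_all_instances(filters={'tag:chaos-ok': 'true',
                                               'instance-state-name': 'running'})
candidates = [i for r in reservations for i in r.instances]

if candidates:
    victim = random.choice(candidates)
    print('Terminating %s in %s' % (victim.id, victim.placement))
    conn.terminate_instances(instance_ids=[victim.id])
else:
    print('Nothing tagged as killable - nothing to do.')
```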

Best part? Most of that stuff isn’t difficult or expensive, in large part thanks to the on-demand pricing of cloud computing.

Clouds and windmill in full sun.

WHAT ABOUT AWS?

Amazon has some explaining to do about how this outage affected multiple AZs, no question. Even so, high volume sites like Netflix and SmugMug remained online, so there are clearly cloud strategies that worked. Many of the affected companies are probably taking good hard looks at their cloud architecture, as well they should. I know we are, even though we were minimally affected.

Still, SmugMug wouldn’t be where we are today without AWS. We had a monster outage (~8.5 hours of total downtime) with AWS a few years ago, where S3 went totally dark, but that’s been the only significant setback. Our datacenter related outages have all been far worse, for a wide range of reasons, as many of our loyal customers can attest. 😦 That’s one of the reasons we’re working so hard to get our remaining services out of our control and into Amazon’s – they’re still better at this than almost anyone else on earth.

Will we suffer outages in the future because of Amazon? Yes. I can guarantee it. Will we have fewer outages? Will we have less catastrophic outages? That’s my bet.

THE CLOUD IS DEAD!

There’s a lot of noise on the net about how cloud computing is dead, stupid, flawed, makes no sense, is coming crashing down, etc. Anyone selling that stuff is simply trying to get page views and doesn’t know what on earth they’re talking about. Cloud computing is just a tool, like any other. Some companies, like Netflix and SimpleGeo, likely understand the tool better. It’s a new tool, so cut the companies that are still learning some slack.

Then send them to my blog. 🙂

Oh, and while you’re here, would you mind doing me a huge favor? If you use StackOverflow, ServerFault, or any other StackExchange sites – I could really use your help. Thanks!

And, of course, we’re always hiring. Come see what it’s like to love your job (especially if you’re into cloud computing).

UPDATE: Joe Stump is out with the best blog post about the outage yet, The Cloud is not a Silver Bullet, imho.

On Why Auto-Scaling in the Cloud Rocks

December 9, 2008 70 comments
SkyNet Lives - EC2 at SmugMug

In high school, I had a great programmable calculator. I’d program it to solve complicated math and science problems “automatically” for me. Most of my teachers got upset if they found out, but I’ll always remember one especially enlightened teacher who didn’t. He said something to the effect of “Hey, if you managed to write software to solve the equation, you must thoroughly understand the problem. Way to go!”.

George Reese wrote up a blog post over at O’Reilly the other day called On Why I Don’t Like Auto-Scaling in the Cloud. His main argument seems to be that auto-scaling is bad and reflects poor capacity planning. In the comments, he specifically calls SmugMug out, saying we’re “using auto-scaling as a crutch for poor or non-existent capacity planning”.

George is like one of those math teachers who doesn’t “get it”. I was tempted not to write this post because he gets it so wrong, I’d hate to spread that meme. SkyNet auto-scales well. No humans at SmugMug are monitoring it and it just hums along, doing its job. Why is it so efficient? Because I understand the equation. I know what metrics drive our capacity planning and I programmed SkyNet to take these into account. It checks an awful lot of data points every minute or so – this isn’t simply “oh, we have idle CPU, let’s kill some instances.” (I would argue, though, that depending on the application, simple auto-scaling based on CPU usage or a similar data point can be very effective, too.)
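
To make “I understand the equation” a little more concrete, here’s a toy illustration of what a multi-metric scaling decision can look like. This is emphatically not SkyNet’s code – every metric name, threshold, and scaling step is invented – the only point is that the decision is driven by several business-level signals, not just idle CPU:

```python
# Toy example of a multi-metric auto-scaling decision (not SkyNet's real code).
def desired_worker_count(metrics, current_count):
    """Return how many render workers we want, given recent measurements.

    `metrics` is assumed to be a dict of recent averages, e.g.:
      {'queue_depth': 1200, 'avg_job_seconds': 3.5, 'jobs_per_minute': 900}
    """
    # Rough throughput per worker, derived from observed job times.
    jobs_per_worker_per_minute = 60.0 / max(metrics['avg_job_seconds'], 0.1)

    # Capacity needed to drain the backlog within ~5 minutes, and to keep up
    # with the steady-state arrival rate.
    backlog_workers = metrics['queue_depth'] / (5 * jobs_per_worker_per_minute)
    steady_workers = metrics['jobs_per_minute'] / jobs_per_worker_per_minute

    target = int(max(backlog_workers, steady_workers)) + 1  # small safety margin

    # Scale up quickly, scale down slowly, and never go below a floor of 2.
    if target > current_count:
        return target
    return max(current_count - 1, 2, target)
```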

SkyNet has been in production for over a year with only two incidents of note and SmugMug has more than doubled in size and capacity during that time without adding any new operations people. How on earth is this a bad thing?

I'm *not* speaking at Cloud Computing Expo 2008

November 17, 2008 11 comments

Just a quick update, I was invited to speak at Sys-Con’s Cloud Computing Expo 2008 West (how’s that for a mouthful?) and accepted, planning on talking about SkyNet, S3, and our future use of cloud computing. Alas, my Inbox is so crazy, I failed to see the handful of emails the conference sent me asking me to sign a contract of some type. So I missed the deadline and they canceled my spot. (BTW, I can’t recall a conference ever asking me to sign something to speak, but this one does and I was full of FAIL.) 😦

So, I’m sorry, despite being listed on the program, I’m *not* speaking there this week. It was my bad – I just missed the emails (as I miss so many emails these days). But still, a phone call from them wouldn’t have hurt, would it?

Who knows if I’ll be on their invitation list next year, but the conference will be great anyway, so have a great time without me!

Huge EC2 release: Load Balancing & Auto-Scaling!

October 27, 2008 7 comments

June 5th, 2008 near Maryville, Missouri by Shane Kirk

In case you didn’t see it, Amazon had a huge EC2 announcement the other day that included:

  • EC2 is now out of beta.
  • EC2 has an SLA!
  • Windows is now available on EC2
  • SQL Server is now available on EC2

But the really cool bits, if you ask me, are the announcements about the next wave of related services:

  • Monitoring
  • Load Balancing
  • Auto-Scaling
  • A web-based management console

As frequent readers of my blog and/or conference talks will know, this means one of the last important building blocks to creating fully cloud-hosted applications *at scale* is nearly ready for primetime.

For those keeping score at home, my personal checklist shows that the only thing now missing is a truly scalable, truly bottomless database-like data store. Neither Elastic Block Storage (EBS) nor SimpleDB really solve the entire scope of the problem, though they’re great building blocks that do solve big pieces (or everything, at smaller scale). I’m positive that someone (Amazon or other) will solve this problem and I can start moving more stuff “to the Cloud”.

I can’t wait.

Live-tweeting Cloud keynote at PDC 2008

October 27, 2008 1 comment

UFO OR CLOUD? by Shane Kirk

Microsoft is announcing some exciting Cloud Computing stuff today at their Professional Developers Conference (PDC). Assuming it’s the same stuff (and more?) I’ve been briefed on over the last year, it’s pretty exciting stuff.

I’ll be live-tweeting the best bits over on my Twitter account. If this stuff is interesting to you, come check it out.
