SkyNet Lives! (aka EC2 @ SmugMug)

Everyone knows that SmugMug is a heavy user of S3, storing well over half a petabyte of data (non-replicated) there. What you may not know is that EC2 provides a core part of our infrastructure, too. Thanks to Amazon, the software and hardware that processes all of your high-resolution photos and high-definition video is totally scalable without any human intervention. And when I say scalable, I mean both up and down, just the way it should be. Here’s our approach in a nutshell:
OVERVIEW
The architecture basically consists of three software components: the rendering workers, the batch queuing piece, and the controller. The rendering workers live on EC2, and both the queuing piece and the controller live at SmugMug. We don’t use SQS for our queuing mechanism for a few reasons:
- We’d already built a queuing mechanism years ago, and it hasn’t (yet?) hit any performance or reliability bottlenecks.
- SQS’s pricing used to be outta whack for what we needed. They’ve since dramatically lowered the pricing and it’s now much more in line with what we’d expect – but by then, we were done.
- The controller consumes historical data to make smart decisions, and our existing queuing system was slightly easier to generate the historical data from.
RENDER WORKERS
Our render workers are totally “dumb”. They’re literally bare-bones CentOS 5 AMIs (you can build your own, or use RightScale’s, or whatever you’d like) with a single extra script on them which is executed from /etc/rc.d/rc.local. What does that script do? It fetches intelligence. 🙂
When that script executes, it sends an authenticated request to get a software bundle, extracts the bundle, and starts the software inside. That’s it. The software inside the bundle is self-aware and self-updating, too – automatically fetching updated software, terminating older versions, and relaunching itself. This makes it super-simple to push out new SmugMug software releases – no bundling up new AMIs, testing them, or anything else that’s messy. Simply update the software bundle on our servers and all of the render workers automatically get the new release within seconds.
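In rough Python, the self-update dance looks something like this – a minimal sketch, with the version endpoint, paths, and fetch helper all made up for illustration:

```python
# Hypothetical self-update check, run between jobs. If a newer bundle is
# available, fetch it and replace the running process with the new release.
import os
import sys
import urllib.request

VERSION_URL = "https://internal.example.com/worker/latest-version"  # made up

def maybe_self_update(current_version: str) -> None:
    latest = urllib.request.urlopen(VERSION_URL).read().decode().strip()
    if latest == current_version:
        return
    download_and_unpack_bundle(latest)  # hypothetical fetch helper
    # Replace this process with the new version -- no AMI rebuild, no reboot.
    os.execv(sys.executable, [sys.executable, "/opt/worker/main.py"])
```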
Of course, worker instances might have different roles or be assigned to work with different SmugMug clusters (test vs production, for example), so we have to be able to give them instructions at launch. We do this through the “user-data” launch parameter you can specify for EC2 instances – it gives the software all the details needed to choose a role, get software, and launch it. Reading the user-data couldn’t be easier. If you haven’t done it before, just fetch http://169.254.169.254/latest/user-data from your running instance and parse it.
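Here’s a minimal sketch of such a boot script in Python – the metadata URL is the real one, but the JSON layout of the user-data and the bundle URL are assumptions for illustration:

```python
# Hypothetical boot script along the lines described above: read user-data,
# fetch the software bundle, unpack it, and hand control to the code inside.
import json
import subprocess
import tarfile
import urllib.request

# The EC2 instance metadata service address is fixed and real.
USER_DATA_URL = "http://169.254.169.254/latest/user-data"

def bootstrap() -> None:
    # Assume user-data is JSON carrying a role, a cluster, and a bundle URL.
    config = json.loads(urllib.request.urlopen(USER_DATA_URL).read())

    # Fetch and unpack the software bundle (authenticated in the real system).
    urllib.request.urlretrieve(config["bundle_url"], "/tmp/bundle.tar.gz")
    with tarfile.open("/tmp/bundle.tar.gz") as bundle:
        bundle.extractall("/opt/worker")

    # Hand off; the bundle's own code is self-updating from here on.
    subprocess.run(["/opt/worker/start.sh", config["role"], config["cluster"]])

if __name__ == "__main__":
    bootstrap()
```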
Once they’re up and running, they simply ping the queue service with a “Hi, I’m looking for work. Do you have any?” request, and the queue service either supplies them with work or gives them some other directive (shutdown, software update, take a short nap, etc). Once a job is done (or has errored out), the worker stores the work result on S3, notifies the queue service that the job is done, and asks for more work. Simple.
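The whole worker lifecycle fits in a loop like this minimal sketch (the queue-service API and render call are hypothetical stand-ins):

```python
# Hypothetical worker loop: ask the queue service for work, do it, store the
# result on S3, report completion, and ask again.
import time

def worker_loop(queue, s3) -> None:
    while True:
        # "Hi, I'm looking for work. Do you have any?"
        directive = queue.get_work()

        if directive.action == "shutdown":
            break
        if directive.action == "nap":
            time.sleep(directive.seconds)
            continue

        # Normal case: a render job. On error we report rather than retry;
        # the queue service's time-based re-queuing handles the rest.
        try:
            result = render(directive.job)              # hypothetical
            s3.put(directive.job.result_key, result)
            queue.mark_done(directive.job.id)
        except Exception as err:
            queue.mark_failed(directive.job.id, str(err))
```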
QUEUE SERVICE
This is your basic queuing service, probably very similar to any other queuing service you’ve seen before. Ours supports job types (new upload, rotate, watermark, etc) and priorities (Pros go to the head of the line, etc) as well as other details. Upon completion, it also logs historical data such as time to completion. It also supports time-based re-queuing in the event of a worker outage, miscommunication, error, or whatever. I haven’t taken a really hard look at SQS in quite some time, but I can’t imagine it would be very difficult to implement on SQS for those of you starting fresh.
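If you’re building one from scratch, the heart of a DB-backed version is a dequeue that honors priority and leases jobs so they re-queue on timeout. A rough sketch, with the schema and names invented for illustration:

```python
# Sketch of a priority-ordered dequeue with time-based re-queuing. A job is
# "leased" by stamping lease_until; if the worker vanishes, the lease expires
# and the job becomes eligible again. Schema and field names are made up.
import sqlite3
import time

LEASE_SECONDS = 600  # re-queue jobs whose worker has gone silent

def get_next_job(db: sqlite3.Connection):
    now = time.time()
    row = db.execute(
        """SELECT id, job_type, payload FROM jobs
           WHERE done = 0 AND lease_until < ?
           ORDER BY priority DESC, created_at ASC
           LIMIT 1""",
        (now,),
    ).fetchone()
    if row is None:
        return None
    job_id, job_type, payload = row
    db.execute("UPDATE jobs SET lease_until = ? WHERE id = ?",
               (now + LEASE_SECONDS, job_id))
    db.commit()
    return job_id, job_type, payload
```

(In a real multi-worker deployment the select-and-lease would need to be a single atomic claim, but the shape is the same.)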
CONTROLLER (aka SkyNet)
For me, this was the fun part. Initially we called it RubberBand, but we had an unusual partial outage one day which caused it to go berserk and launch ~250 XL instances (~2000 normal EC2 instances) in a single call. Clearly, it had gained sentience and was trying to take over the world, so we renamed it SkyNet. (We’ve since corrected the problem and given SkyNet more reasonable thresholds and limits. And yes, I caught it within the hour.)
SkyNet is completely autonomous – it operates with zero human interaction, neither watched nor guided. No one at SmugMug even pays attention to it anymore (and we haven’t for many months) since it operates so efficiently. (Yes, I realize that means it’s probably well on its way to world domination. Sorry in advance to everyone killed in the forthcoming man-machine war.)
Roughly once per minute, SkyNet makes an EC2 decision: launch instance(s), terminate instance(s), or sleep. It has a lot of inputs – it checks anywhere from 30 to 50 pieces of data to make an informed decision. One of the reasons for that is that we have a variety of different jobs coming in, some of which (uploads) are semi-predictable. We know that lots of uploads come in every Sunday evening, for example, so we can begin our prediction model there. Other jobs, though, such as watermarking an entire gallery of 10,000 photos with a single click, aren’t predictable in any useful way, and we can only respond once the load hits the queue.
A few of the data points SkyNet looks at (there’s a rough sketch of the decision loop after this list):
- How many jobs are pending?
- What’s the priority of the jobs?
- What type of jobs are they?
- How complex are the pending jobs? (ex: HD video vs 1Mpix photo)
- How time-sensitive are the pending jobs? (ex: Uploads vs rotations)
- Current load of the EC2 cluster
- Current # of jobs per sample processed
- Average time per job per sample
- Historical load and job performance
- How close any instances are to the end of their 1-hour cost window
- Recent SkyNet actions (start/terminate/etc)
.. and the list goes on.
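The real inputs and weights are the secret sauce, but the shape of the loop is roughly this – every name, number, and helper in this sketch is invented for illustration:

```python
# Rough shape of the once-per-minute control loop. Estimate demand from the
# queue plus history, compare it against running AND still-booting capacity
# (so startup lag doesn't cause over-launching), and act conservatively.
import time

TARGET_SLACK = 0.25      # aim for ~25% excess capacity, averaged over a day
MAX_LAUNCH_PER_TICK = 4  # hard cap: no more ~250-instance sentience events

def skynet_tick(queue, cloud, history) -> None:
    demand = estimate_demand(queue, history)   # hypothetical prediction model
    capacity = cloud.running_capacity() + cloud.booting_capacity()
    desired = demand * (1 + TARGET_SLACK)

    if capacity < desired:
        shortfall = desired - capacity
        cloud.launch(min(instances_for(shortfall), MAX_LAUNCH_PER_TICK))
    elif capacity > desired * 1.5:
        # Only shed instances nearing the end of their billed hour; the
        # hour is paid for either way, so let them keep working until then.
        for inst in cloud.instances_near_hour_boundary():
            cloud.terminate(inst)
            capacity -= inst.capacity
            if capacity <= desired:
                break
    # else: do nothing this tick

def run_forever(queue, cloud, history) -> None:
    while True:
        skynet_tick(queue, cloud, history)
        time.sleep(60)
```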
Our goal is to keep enough slack around to handle surges of unpredictable batch operations, but not so much that it drains our bank account. We’ve settled on roughly 25% excess compute capacity, averaged over a full 24-hour period, and SkyNet keeps us remarkably close to that number. When we have to make a decision, we always err on the side of more excess (so we get faster processing times) rather than less. It’s great to save a few bucks here and there that we can plow back into better customer service or a new feature – but not if photo uploads aren’t processing, consistently, within 5-30 seconds of upload.

Our workers like lots of threads, so SkyNet does its best to launch c1.xlarge instances (Amazon calls these “High-CPU Instances”), but it is smart enough to request other, equivalent instance sizes (2 x Large, 8 x Small, etc) in the event it can’t allocate as many c1.xlarge instances as it would like. Our application doesn’t care how big/small the instances are, just that we get lots of CPU cores in aggregate. (We were in the Beta for the High-CPU feature, so we’ve been using it for months.)
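As a sketch of that fallback – using today’s boto3 for illustration (any EC2 API client works) and the 2 x Large / 8 x Small cost equivalence above:

```python
# Hypothetical "prefer c1.xlarge, fall back to cost-equivalent sizes" logic.
# Weights follow the equivalence above: 1 c1.xlarge ~ 2 m1.large ~ 8 m1.small.
import boto3

EQUIVALENTS = [("c1.xlarge", 1.0), ("m1.large", 0.5), ("m1.small", 0.125)]

def launch_capacity(xlarge_units: float, image_id: str, user_data: str):
    ec2 = boto3.client("ec2", region_name="us-east-1")
    for instance_type, weight in EQUIVALENTS:
        count = round(xlarge_units / weight)
        if count == 0:
            continue
        try:
            ec2.run_instances(
                ImageId=image_id,
                InstanceType=instance_type,
                MinCount=count,
                MaxCount=count,
                UserData=user_data,  # role/cluster/bundle details from above
            )
            return instance_type, count
        except Exception:
            continue  # e.g. insufficient capacity; try the next size down
    return None
```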
One interesting thing we had to take into account when writing SkyNet was EC2 startup lag. Don’t get me wrong – I think EC2 starts up reasonably fast (~5 mins max, usually less) – but when SkyNet is making a decision every minute, you could launch too many instances if you don’t take recent actions into account to cover the startup lag. Conversely, you need to start instances a little earlier than you might actually need them, otherwise you fall behind.
THE MONEY
SmugMug is a profitable business, and we like to keep it that way. The secrets to efficiently using EC2, at least in our use case, are as follows:
- Take advantage of the free S3 transfers. This is a biggy. Our workers get and put almost all of their bytes to/from S3.
- Make sure you have scaling down working as well as scaling up. At 3am on an average Wednesday morning, we have very few instances running.
- Use the new High-CPU Instances. Twice the CPU resources for the same $$ if you don’t need RAM.
- Amazon kindly gives you 30 days to monetize your AWS expenses. Use those 30 days wisely – generate revenues. 🙂
WHY NO WEB SERVERS?
I get asked this question a lot, and it really comes down to two issues, one major and one minor:
- No complete DB solution. SimpleDB is interesting, and the new EC2 Persistent Storage is too, but neither provides a complete solution for us. EC2 storage isn’t performant enough without some serious, painful partitioning to a finer grain than we use now – which comes with its own set of challenges – and SimpleDB isn’t performant enough either and doesn’t address all of our use cases. Since latency to our DBs matters a great deal to our web servers, this is a deal-killer – I can’t have EC2 web servers talking to DBs in my datacenters. (There are a few corner cases we’re exploring where we probably can, but they’re the exception, not the rule.)
- No load balancing API. They’ve got an IP address solution in the form of Elastic IPs, which is awesome and a major step forward, but they don’t have a simple Load Balancer API that I can throw my web boxes behind. Yes, I realize I can do it manually using EC2 instances, but that’s more fragile and difficult (and has unknown scaling properties at our scale). If the DB issue were solved, I’d probably dig in and figure out how to do it ourselves – but since it’s not, I can keep asking for this in the meantime.
Let me be very clear here: I really don’t want to operate datacenters anymore, despite the fact that we’re pretty good at it. It’s a necessary evil because we’re an Internet company, but our mission is to be the best photo sharing site. We’d rather spend our time giving our customers great service and writing great software than managing physical hardware. I’d rather have my awesome Ops team interacting with software remotely for 100% of their duties (and mostly just watching software like SkyNet do its thing). We’ll get there – I’m confident of that – we’re just not there yet.
Until then, we’ll remain a hybrid approach.
Fantastic post. More of these please.
Have you seen Engine Yard’s “Vertebra” yet?
http://brainspl.at/articles/2008/06/02/introducing-vertebra
Looks like something you guys might be interested in.
@david
I’ll try. I have a business to run and three small kids to take care of. 🙂
@BJ Clark
I hadn’t, but now I have. Looks interesting!
I love your EC2 posts. It is quite clear you have done a lot of testing, and I will be taking a lot of the advice if I ever have to scale my startup 🙂
-Jeff
A very interesting post! SkyNet sounds super awesome.
I was curious how SkyNet uses the inputs (pending jobs, priorities, etc) to predict the number of required EC2 instances. Is it some simple model you’ve come up with based on past experience? Or does it use fancy machine-learning techniques?
Keep it up! SmugMug rocks 🙂
@Jeff O’Hara
Thanks!
@jack
I wish I was smart enough to build machine-learning techniques into SkyNet, but I’m not. Probably better for all mankind that I’m not. 🙂 It just uses reasonable thresholds based on some initial tweaking during its first week or two of running. So far, so good.
SM won’t even do subdirectories.
Subdirectories are kind of a dinosaur. There are sites that do them (some well, some not so well), but tagging is really the way to go with pictures since any given picture will usually fall into multiple categories. Besides, this is hardly the place to complain about such a feature when Don is being so giving with his experiences.
@Ian Yates
Sure we do. We just limit your sub-directory depth. We’re open to making the limit larger (or unlimited), just let us know what you’d like to see. It’s certainly not a technical limitation – it’s a usability compromise.
Wondering how Google App Engine’s BigTable is going to duke it out with SimpleDB.
What do you guys prefer to write SkyNet in – Java, PHP, Perl, Python?
Great minds think alike.
We use a similar framework at Spinn3r and have for the last two years or more.
We have our own queue instance which gives out jobs to a farm of machines that execute them.
We’re mostly IO bound so moving these tasks into EC2 would cost a ton of cash.
That and we’re really not very elastic. Our load is almost always the same.
At some point we’ll probably open source our stuff.
Having our own queue turns out to be a big win because we can scale it and have some custom performance enhancements there.
BTW, we’re hiring for a Senior Systems Engineer to help out with our cluster. Anyone have any leads? When we hire someone I’ll have more time to make sure this stuff is open sourced 😉
Kevin
Very well written article .. ! Informative +several million ..
Pretty awesome stuff, Don. Mixbook uses EC2 for image processing (and S3 for image storage) just like you guys, but we have not created any sentient super beings to take over the world quite yet. 🙂
“…caused it to go berzerk and launch ~250 XL instances (~2000 normal EC2 instances) in a single call. Clearly, it had gained sentience and was trying to take over the world, so we renamed it SkyNet…” — Hahahaha.
Don, very nice write up on a cool system. It’s actually very similar to what we do with RightGrid, which a number of our customers are using to scale up & down based on the size of an SQS queue or the age of the items at the head of the queue. BTW, I bet that when the storage volumes become a reality on EC2 you’ll love them. Makes database management really flexible. Of course we’d love to work with you on moving over 😉 Cheers & best wishes for good profits!
So, when should we save all of our uploads for, so we can help slim down that idle percentage? 🙂
@Chris: The answer’s already in the post:
“At 3am on an average Wednesday morning, we have very few instances running.”
🙂
Thanks for the behind the scenes post Don. Always great to know a bit more about the tech behind the coolest looking photo host out there.
Great Article 🙂
I was waiting for it since January! (http://blogs.smugmug.com/don/2007/12/14/amazon-announces-simpledb-in-beta/#comment-97454)
Just out of curiosity, are those numbers (day chart) instances or CPU cores?
Awesome post Don!
We are doing almost exactly the same thing for processing mobile web site analytics. I love it when I launch 10 instances of our coding servers 4 times a day that terminate themselves when done. Total cost: $4.00.
We also have a similar issue when it comes to the actual database. Ultimately the data ends up in our SQL Servers, and the dashboard web servers are on physical machines. The nice thing is that the web servers that get millions of page hits per day are in the cloud. If a customer of ours gets unexpected traffic, more instances are launched. All the logs are processed in the cloud and we use S3 for all storage. By the time we transfer the data to our database, it is 1/10th the size.
We also created a generic “application server” that downloads the actual apps depending on the parameters it is launched with.
I look forward to reading more about your company and how you use Ec2.
Greg Harris
http://www.mobilytics.net
Don – one thing I noticed in re-reading your old S3 post is the thing that has kept me from implementing S3. The price, after almost 2 years in service, is still $0.15/GB. They have cut the bandwidth cost in half, which is nice, but they added transactional costs when they made that move, which didn’t exist before. I am currently building 22TB (usable space on RAID6) servers with Windows Server for under $12K. My price was twice that when I first evaluated S3. Granted, I still have to provide datacenter space for it, but if I can’t see the price realistically coming down over time, how can I project future costs in a way that will make it look like a good deal?
Is this the same skynet found here http://skynet.rubyforge.org/?
@Jorge Oliviera: Neither. That’s just an internal number that’s not really useful outside of SkyNet.
@Brad: I think you’d be hard pressed to make 3 durable, replicated copies that are geographically dispersed for less than $0.15/GB over, say, 12 months. (Absolutely you can do it if you project 5-10 years out, but in theory S3 will have gotten cheaper during that same window…). Having said that, Amazon’s $0.15/GB price is on a collision course with falling HDD prices. I predict they will collide before the end of 2008, which means I also predict an Amazon price reduction before then.
@Andy: Nope, but that sure looks interesting. 🙂
hi there,
not specifically related to this article, but it would be great to know how you got on initially loading a great deal of data onto S3 – methods, issues, advice, etc…
Thanks!
“Take advantage of the free S3 transfers. This is a biggy. Our workers get and put almost all of their bytes to/from S3.”
That’s indeed a biggy … I am designing an application that needs to process loads of PDFs, so EC2/S3 looks perfect for that. But most of our audience is in Europe, and for us S3-EU is like 10 times faster than S3-US. I just wish Amazon would make EC2 available in the European datacenters too, so we could benefit from the free EC2-S3 traffic.
Is open sourcing Skynet at any point in your future plans? It looks like a really cool piece of code.
For your question “Why no webservers?” – there are several more specialized clouds from vendors other than Amazon which are better tailored for web hosting. For example, in the US: MediaTemple, Mosso, GoGrid; and in the UK: ElasticHosts, FlexiScale.
What I still wonder is: what’s the driving technology behind SmugMug? Is it PHP only? Java based? Something different?
Don,
How are you dealing with eventual consistency delays? We’re seeing several-minute delays between injecting a key and being able to get it. I realized there’d be some delay… but minutes make things more interesting.
All your Amazon posts have been great, and have helped us model our own approach from your experience. The one question I have is how you handle backing up S3. I think in one of your previous posts I read that you use them as your primary storage, but I haven’t read anything about backups. I’m trying to formulate our own backup strategy, and I’d be interested in any ideas or experiences you could share.