Archive for June, 2008

SmugVault – Store everything for next to nothing.

June 23, 2008 36 comments

SmugMug has always allowed everyone to upload an unlimited number of web-displayable files – JPEG, GIF, PNG, and MP4 – but to date we haven’t been able to accept the RAW files generated by modern digital cameras. For years our customers have been asking, begging, and pleading for us to let them upload their priceless archives.  I’m happy to announce that day has come!

SmugVault is a new SmugMug product that lets you upload all the RAW, PSD, BMP, and TIFF files you’d like.  And not just those – we’ll accept XMP sidecars, PDF files, Word documents, Excel spreadsheets, video archives, and anything else you might want to store with your photos.  What’s more, we’ll bundle your files together for easy, intuitive browsing and safe retrieval.

Thanks to an innovative new product from Amazon Web Services, DevPay, you only pay pennies per GB for the storage you actually use each month.  There’s no big flat fee and no maximum storage amount – it’s truly unlimited and pay-by-the-drink.  Store one megabyte or one billion megabytes – we don’t care.  Whatever works best for your workflow and archival needs, SmugVault can handle it.


photo by: Andy Williams

Compose a beautiful panorama out of 20 RAW files?  No problem – upload your final JPEG and bundle all 20 RAW files with it, along with your Photoshop PSD containing all your layers and edits and the XMP sidecar detailing the Adobe Lightroom changes you made during the editing process.  You’ll see just the single perfect photo on your SmugMug site, but with a single click, you have access to every component you’ve associated with it.


Don’t want to upload final corrected JPEGs for all the RAWs you shot at that huge event, but still want them stored somewhere safe and sound?  No problem.  Just upload the RAWs straight off your camera and we’ll store them for safe retrieval.  Want us to generate JPEG previews of those uncorrected RAW files so you can browse your SmugVault visually to find that perfect shot?  We’ll do that too.

Loving SmugMug’s new HD video features, but wishing you had somewhere safe to archive the original footage rather than the web-friendly lower bitrate copies?  Not a problem.  Just add them to your SmugVault.

Unfortunately, we hear about people losing their priceless memories to hurricanes, fire, and computer failure almost every day.  We’ve always been glad we can simply help them get the JPEGs back – remember, your photos are yours, not ours – and I’m even more excited that we can now help everyone recover their priceless archives too!

Read more: Release Notes | Pricing | Help | Wiki FAQ


Speaking at Velocity next week

June 19, 2008 6 comments

photo by: Andrew Tobin

I’m thrilled that O’Reilly is putting on a conference dedicated to performance and operations, and especially happy to be speaking there. I’m on a great-sounding panel, Success: A Survival Guide. I’m sure you’ll hear about our first few years, when, like clockwork, we got massively hammered with traffic on the same days each year – and what we did to handle it.

If you’re going to the conference, come say “Hi!”. I’ll be wearing a red SmugMug hat, as always. 🙂

Oh, and if you haven’t signed up yet, use ‘vel08js’ to get a nice discount. 🙂

Vote SmugMug at LifeHacker!

June 12, 2008 3 comments

I’m really honored that SmugMug made LifeHacker’s Five Best Photo Sharing Web Sites.

They have voting open to pick the best – go vote for your favorite!

SmugMug loves OAuth

June 11, 2008 10 comments
Caitlin Ann Parry

SmugMug’s API now supports OAuth! We actually rolled out support a few weeks ago, but our documentation has turned into such a mess that I delayed announcing it. Finally, though, I just couldn’t keep quiet – I’m so excited I just had to tell someone!

So I’m sorry the docs are all messed up – they’re in multiple locations and out of date. We’ve been working hard on re-writing them to make them clearer and easier to understand, but we’re not quite done yet. David, though, has a great excuse for why we’re behind – you’re looking at her! His beautiful daughter, Caitlin Ann, was born at roughly the same time our OAuth support shipped. He’s had his hands full. 🙂

So go read the new docs on OAuth, the old docs on the rest of the API, and the dgrin API forum so you can get cracking on your own OAuth services and apps. Hopefully lots of the 1200+ apps our awesome developers have already created will adopt it quickly.

For those who don’t know, OAuth is an open standard for secure authentication. It allows applications and services to authenticate to SmugMug and other OAuth-enabled APIs without needing to know or store the users’ sensitive login and password information. I imagine at some point OAuth might become the *only* way to authenticate to our API, so I’d at least start playing with it now.
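
If you’ve never seen OAuth 1.0 under the hood, here’s a minimal Python sketch of how a client signs a request with HMAC-SHA1. Treat it as illustration only – the endpoint, method name, and credentials below are placeholders, and a real app should use a maintained OAuth library:

    import base64, hashlib, hmac, time, uuid
    from urllib.parse import quote

    def sign(method, url, params, consumer_secret, token_secret=""):
        # Build the OAuth 1.0 signature base string from percent-encoded, sorted params.
        pairs = sorted((quote(str(k), safe=""), quote(str(v), safe="")) for k, v in params.items())
        param_str = "&".join(f"{k}={v}" for k, v in pairs)
        base = "&".join([method.upper(), quote(url, safe=""), quote(param_str, safe="")])
        # The signing key is consumer_secret&token_secret (token secret is empty
        # until you've traded the request token for an access token).
        key = f"{quote(consumer_secret, safe='')}&{quote(token_secret, safe='')}"
        digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
        return base64.b64encode(digest).decode()

    params = {
        "method": "smugmug.albums.get",         # illustrative API method
        "oauth_consumer_key": "YOUR_API_KEY",   # placeholder credentials
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    params["oauth_signature"] = sign(
        "GET", "https://api.smugmug.com/services/api/rest/1.2.2/", params, "YOUR_API_SECRET")

This only shows the signing; a real app also has to do the request-token/access-token dance described in the docs.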

Your photos are yours, not ours – long live open standards and data portability!

UPDATE: I should have noted that this is totally usable now – you don’t have to wait for the docs update. It’s just mildly painful to hop between a few different locations to find all the documentation. This is on a new Beta API branch, 1.2.2, so you’ll need to use the 1.2.2 endpoints.

iPhone SDK, NDA, and SmugMug

June 9, 2008 8 comments

We’re getting lots of requests about an iPhone app for SmugMug. As you no doubt know, we’re enormous Apple fans over here, and iPhone fans in particular. Most of the company camped out in line at the Palo Alto store (see stories here and here), we were the first photo sharing app with an iPhone-optimized interface (and one of the first web apps anywhere), and we designed our awesome new video sharing service with the iPhone in mind. So I think it’s no secret that we’d love to have rich, intuitive native iPhone applications for ourselves and our customers.

However, the iPhone SDK NDA is still in effect, so I can neither confirm nor deny that we have an iPhone app in the works, or even say whether we’ve been accepted into the iPhone SDK program. I have no idea why so many companies have chosen to break the NDA and talk about their apps today, but that’s just not the way we roll around here – we like to maintain great relationships with our partner companies, and Apple is a company we’re especially fond of. (Ok, ok, so I’m a fanboy 🙂 )

If/when we get to build an iPhone app or two, we’ll do our absolute best to make sure they’re intuitive to use and take advantage of all the power the iPhone provides. As you can imagine, we’re especially excited about iPhone 3G. 🙂

(Thank goodness Michael Arrington stole the wrong iPhone from me this morning. Whew! 🙂 )

SkyNet Lives! (aka EC2 @ SmugMug)

June 3, 2008 53 comments

Everyone knows that SmugMug is a heavy user of S3, storing well over half a petabyte of data (non-replicated) there. What you may not know is that EC2 provides a core part of our infrastructure, too. Thanks to Amazon, the software and hardware that processes all of your high-resolution photos and high-definition video is totally scalable without any human intervention. And when I say scalable, I mean both up and down, just the way it should be. Here’s our approach in a nutshell:

OVERVIEW

The architecture basically consists of three software components: the rendering workers, the batch queuing piece, and the controller. The rendering workers live on EC2, and both the queuing piece and the controller live at SmugMug. We don’t use SQS for our queuing mechanism for a few reasons:

  • We’d already built a queuing mechanism years ago, and it hasn’t (yet?) hit any performance or reliability bottlenecks.
  • SQS’s pricing used to be outta whack for what we needed. They’ve since dramatically lowered the pricing and it’s now much more in line with what we’d expect – but by then, we were done.
  • The controller consumes historical data to make smart decisions, and our existing queuing system was slightly easier to generate the historical data from.

RENDER WORKERS

Our render workers are totally “dumb”. They’re literally bare-bones CentOS 5 AMIs (you can build your own, or use RightScale’s, or whatever you’d like) with a single extra script on them which is executed from /etc/rc.d/rc.local. What does that script do? It fetches intelligence. 🙂

When that script executes, it sends an authenticated request to get a software bundle, extracts the bundle, and starts the software inside. That’s it. Further, the software inside the bundle is self-aware and self-updating, too, automatically fetching updated software, terminating older versions, and relaunching itself. This makes it super-simple to push out new SmugMug software releases – no bundling up new AMIs and testing them or anything else that’s messy. Simply update the software bundle on our servers and all of the render workers automatically get the new release within seconds.
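
That fetch-and-launch step is simple enough to sketch. The real script isn’t public, so the bundle URL, token path, and launch script below are hypothetical – but the rc.local hook boils down to something like this:

    import subprocess
    import tarfile
    import urllib.request

    BUNDLE_URL = "https://software.example.com/render-worker.tar.gz"  # hypothetical
    TOKEN = open("/etc/worker-token").read().strip()                  # hypothetical

    # Authenticated request for the current software bundle.
    req = urllib.request.Request(BUNDLE_URL, headers={"X-Auth-Token": TOKEN})
    with urllib.request.urlopen(req) as resp, open("/tmp/bundle.tar.gz", "wb") as f:
        f.write(resp.read())

    # Extract and launch; from here the bundle updates and relaunches itself.
    with tarfile.open("/tmp/bundle.tar.gz") as tar:
        tar.extractall("/opt/worker")
    subprocess.Popen(["/opt/worker/start.sh"])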

Of course, worker instances might have different roles or be assigned to different SmugMug clusters (test vs production, for example), so we have to be able to give them instructions at launch. We do this through the “user-data” launch parameter you can specify for EC2 instances – it gives the software all the details needed to choose a role, get software, and launch it. Reading the user-data couldn’t be easier. If you haven’t done it before, just fetch http://169.254.169.254/latest/user-data from your running instance and parse it.
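
In Python, for example (the “role cluster” payload format here is just one way you might pack the parameters – user-data is free-form):

    import urllib.request

    # The metadata address is EC2's standard one; the payload format is yours
    # to define - here we assume a simple "role cluster" pair.
    raw = urllib.request.urlopen("http://169.254.169.254/latest/user-data").read()
    role, cluster = raw.decode().split()  # e.g. "render-worker production"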

Once they’re up and running, they simply ping the queue service with a “Hi, I’m looking for work. Do you have any?” request, and the queue service either supplies them with work or gives them some other directive (shut down, update software, take a short nap, etc). Once a job is done (or has errored out), the worker stores the result on S3, notifies the queue service that the job is done, and asks for more work. Simple.
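
Here’s that lifecycle as a Python sketch – the queue and S3 client objects and their method names are assumptions for illustration, not our actual internal API:

    import time

    def process(job):
        """Stand-in for the actual rendering work (resize, rotate, transcode, ...)."""
        ...

    def worker_loop(queue, s3):
        # queue and s3 are assumed client objects; the method names are illustrative.
        while True:
            reply = queue.get_work()                  # "Hi, I'm looking for work."
            if reply.kind == "job":
                try:
                    result = process(reply.job)
                    s3.put(reply.job.result_key, result)   # results land on S3
                    queue.job_done(reply.job.id)
                except Exception as err:
                    queue.job_failed(reply.job.id, str(err))
            elif reply.kind == "shutdown":
                break                                 # controller is scaling down
            else:
                time.sleep(reply.nap_seconds)         # "take a short nap"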

QUEUE SERVICE

This is your basic queuing service, probably very similar to any other queuing service you’ve seen before. Ours supports job types (new upload, rotate, watermark, etc) and priorities (Pros go to the head of the line, etc), as well as other details. Upon completion, it also logs historical data such as time to completion. It also supports time-based re-queuing in the event of a worker outage, miscommunication, error, or whatever. I haven’t taken a really hard look at SQS in quite some time, but I can’t imagine it would be very difficult to implement this on SQS for those of you starting fresh.
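
The time-based re-queue is the one piece worth sketching. With an illustrative in-memory job list standing in for the real store (the timeout and field names are made up for the example):

    import time

    CLAIM_TIMEOUT = 15 * 60  # seconds before a claimed job is assumed lost (illustrative)

    def requeue_stalled(jobs):
        """Put jobs back in the queue if their worker never reported in."""
        now = time.time()
        for job in jobs:
            if job["state"] == "claimed" and now - job["claimed_at"] > CLAIM_TIMEOUT:
                job["state"] = "pending"      # eligible to be handed out again
                job["claimed_at"] = None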

CONTROLLER (aka SkyNet)

For me, this was the fun part. Initially we called it RubberBand, but we had an unusual partial outage one day which caused it to go berserk and launch ~250 XL instances (~2000 normal EC2 instances) in a single call. Clearly, it had gained sentience and was trying to take over the world, so we renamed it SkyNet. (We’ve since corrected the problem and given SkyNet more reasonable thresholds and limits. And yes, I caught it within the hour.)

SkyNet is completely autonomous – it operates with zero human interaction, with no one watching it or providing interactive guidance. No one at SmugMug even pays attention to it anymore (and we haven’t for many months) since it operates so efficiently. (Yes, I realize that means it’s probably well on its way to world domination. Sorry in advance to everyone killed in the forthcoming man-machine war.)

Roughly once per minute, SkyNet makes an EC2 decision: launch instance(s), terminate instance(s), or sleep. It has a lot of inputs – it checks anywhere from 30-50 pieces of data to make an informed decision. One of the reasons for that is we have a variety of different jobs coming in, some of which (uploads) are semi-predictable. We know that lots of uploads come in every Sunday evening, for example, so we can begin our prediction model there. Other jobs, though, such as watermarking an entire gallery of 10,000 photos with a single click, aren’t predictable in a useful way, and we can only respond once the load hits the queue.

A few of the data points SkyNet looks at are:

  • How many jobs are pending?
  • What’s the priority of the jobs?
  • What type of jobs are they?
  • How complex are the pending jobs? (ex: HD video vs 1Mpix photo)
  • How time-sensitive are the pending jobs? (ex: Uploads vs rotations)
  • Current load of the EC2 cluster
  • Current # of jobs per sample processed
  • Average time per job per sample
  • Historical load and job performance
  • How close any instances are to the end of their 1-hour cost window
  • Recent SkyNet actions (start/terminate/etc)

.. and the list goes on.
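
Boiled down to a skeleton that ignores most of those inputs, each once-per-minute tick looks something like this sketch – every name, constant, and formula here is illustrative, not SkyNet’s actual code:

    def decide(pending_jobs, jobs_per_instance_per_min, running, booting,
               target_slack=0.25, max_step=10):
        """One tick: return instances to launch (+), terminate (-), or 0 to sleep."""
        if jobs_per_instance_per_min <= 0:
            return 0
        # Instances still booting count toward supply - EC2 startup lag (~5 min)
        # means ignoring them would over-launch on consecutive ticks.
        effective = running + booting
        needed = pending_jobs / jobs_per_instance_per_min
        # Aim for ~25% slack over raw demand, per the target described below.
        desired = int(needed * (1 + target_slack)) + 1
        delta = desired - effective
        # Hard limits so one bad input can't launch ~250 XLs again.
        return max(-max_step, min(max_step, delta))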

Our goal is to keep enough slack around to handle surges of unpredictable batch operations, but not so much that it drains our bank account. We’ve settled on roughly 25% excess compute capacity when averaged over a full 24-hour period, and SkyNet keeps us remarkably close to that number. When we have to make a decision, we always err on the side of more excess (so we get faster processing times) rather than less. It’s great to save a few bucks here and there that we can plow back into better customer service or a new feature – but not if photo uploads aren’t processing, consistently, within 5-30 seconds of upload.


Our workers like lots of threads, so SkyNet does its best to launch c1.xlarge instances (Amazon calls these “High-CPU Instances”), but it’s smart enough to request equivalent capacity from other instance sizes (2 x Large, 8 x Small, etc) in the event it can’t allocate as many c1.xlarge instances as it would like. Our application doesn’t care how big or small the instances are, just that we get lots of CPU cores in aggregate. (We were in the Beta for the High-CPU feature, so we’ve been using it for months.)
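
A greedy version of that fallback might look like the sketch below – the per-type “core” weights are rough illustrations, not Amazon’s exact compute-unit numbers:

    # Rough per-type weights for aggregate CPU; illustrative only.
    CORES = {"c1.xlarge": 8, "m1.large": 4, "m1.small": 1}

    def plan_instances(cores_wanted, available):
        """Greedily cover the requested core count with whatever types are available."""
        plan = {}
        for itype in sorted(CORES, key=CORES.get, reverse=True):
            n = min(cores_wanted // CORES[itype], available.get(itype, 0))
            if n > 0:
                plan[itype] = n
                cores_wanted -= n * CORES[itype]
        return plan

    # e.g. plan_instances(16, {"c1.xlarge": 1, "m1.large": 4})
    #      -> {"c1.xlarge": 1, "m1.large": 2}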

One interesting thing we had to take into account when writing SkyNet was EC2 startup lag. Don’t get me wrong – I think EC2 starts up reasonably fast (~5 mins max, usually less) – but when SkyNet is making a decision every minute, you could launch too many instances if you don’t take recent actions into account to cover that lag (and, conversely, you need to start instances a little earlier than you actually need them, otherwise you fall behind).

THE MONEY

SmugMug is a profitable business, and we like to keep it that way. The secrets to efficiently using EC2, at least in our use case, are as follows:

  • Take advantage of the free S3 transfers. This is a biggy. Our workers get and put almost all of their bytes to/from S3.
  • Make sure you have scaling down working as well as scaling up. At 3am on an average Wednesday morning, we have very few instances running.
  • Use the new High-CPU Instances. Twice the CPU resources for the same $$ if you don’t need RAM.
  • Amazon kindly gives you 30 days to monetize your AWS expenses. Use those 30 days wisely – generate revenues. 🙂

WHY NO WEB SERVERS?

I get asked this question a lot, and it really comes down to two issues, one major and one minor:

  • No complete DB solution. SimpleDB is interesting, and the new EC2 persistent storage is too, but neither provides a complete solution for us. EC2 storage isn’t performant enough without some serious, painful partitioning to a finer grain than we use now – which comes with its own set of challenges – and SimpleDB isn’t performant enough and doesn’t address all of our use cases. Since latency to our DBs matters a great deal to our web servers, this is a deal-killer – I can’t have EC2 web servers talking to DBs in my datacenters. (There are a few corner cases we’re exploring where we probably can, but they’re the exception, not the rule.)
  • No load balancing API. They’ve got an IP address solution in the form of Elastic IPs, which is awesome and a major step forward, but they don’t have a simple load balancer API that I can throw my web boxes behind. Yes, I realize I can build one manually using EC2 instances, but that’s more fragile and difficult (and has unknown scaling properties at our scale). If the DB issue were solved, I’d probably dig in and figure out how to do it ourselves – but since it’s not, I’ll keep asking for this in the meantime.

Let me be very clear here: I really don’t want to operate datacenters anymore, despite the fact that we’re pretty good at it. It’s a necessary evil because we’re an Internet company, but our mission is to be the best photo sharing site. We’d rather spend our time giving our customers great service and writing great software than managing physical hardware. I’d rather have my awesome Ops team interacting with software remotely for 100% of their duties (and mostly just watching software like SkyNet do its thing). We’ll get there – I’m confident of that – we’re just not there yet.

Until then, we’ll stick with our hybrid approach.
