SmugMug is on StackExchange!

February 8, 2012 1 comment

Long time readers will know that more than a year ago, I tried to get a SmugMug site going on StackExchange.

Today, we’re finally in Public Beta! We’re not done yet, though, so please head over and start asking & answering questions so we can flesh out the knowledge base and create a rich resource for everyone to use.

Thanks to everyone who helped get us here!


Your Internet might die today. Please help.

November 16, 2011 Comments off

I’ll make this quick, since I’d like you to take action rather than sit here and read my blog all day.

Congress may pass legislation that could destroy the freedoms we’ve all enjoyed on the Internet, possibly killing your favorite Internet sites like Facebook, YouTube, Twitter, Google, Yahoo, Flickr, SmugMug, and more. Virtually every site you know and love is vulnerable.

I’ve censored SmugMug’s logo all over our site today and linked to Mozilla’s excellent ‘Protect the Internet’ page where you can quickly & easily take action and make a difference.

If you love the Internet, please do.


Best AWS outage post yet

April 26, 2011 1 comment

From The Cloud is not a Silver Bullet by Joe Stump at SimpleGeo:

…what is so shocking about this banter is that startups around the globe were essentially blaming a hard drive manufacturer for taking down their sites. I don’t believe I’ve ever heard of a startup blaming NetApp or Seagate for an outage in their hosted environments. People building on the cloud shouldn’t get a pass for poor architectural decisions that put too much emphasis on, essentially, network attached RAID1 storage saving their asses in an outage.

Go read the rest, it’s great. Better than mine.

How SmugMug survived the Amazonpocalypse

April 24, 2011 50 comments

tl;dr: Amazon had a major outage last week, which took down some popular websites. Despite using a lot of Amazon services, SmugMug didn’t go down because we spread across availability zones and designed for failure to begin with, among other things.

We’ve known for quite some time that SkyNet was going to achieve sentience and attack us on April 21st, 2011. What we didn’t know is that Amazon’s Web Services platform (AWS) was going to be their first target, and that the attack would render many popular websites inoperable while Amazon battled the Terminators.

Sorry about that, that was probably our fault for deploying SkyNet there in the first place.

We’ve been getting a lot of questions about how we survived (SmugMug was minimally impacted, and all major services remained online during the AWS outage) and what we think of the whole situation. So here goes.

http://jossphoto.smugmug.com/People/People-Digital-Art/2706381_EUKLw#209083325_MaNtP

HOW WE DID IT

We’re heavy AWS users with many petabytes of storage in their Simple Storage Service (S3) and lots of Elastic Compute Cloud (EC2) instances, load balancers, etc. If you’ve ever visited a SmugMug page or seen a photo or video embedded somewhere on the web (and you probably have), you’ve interacted with our AWS-powered services. Without AWS, we wouldn’t be where we are today – outages or not. We’re still very excited about AWS even after last week’s meltdown.

I wish I could say we had some sort of magic bullet that helped us stay alive. I’d certainly share it if I had one. In reality, our stability during this outage stemmed from four simple things:

First, all of our services in AWS are spread across multiple Availability Zones (AZs). We’d use 4 if we could, but one of our AZs is capacity constrained, so we’re mostly spread across three. (I say “one of our” because your “us-east-1b” is likely different from my “us-east-1b” – every customer is assigned to different AZs and the names don’t match up). When one AZ has a hiccup, we simply shift to the other AZs. Often this is graceful, but not always – there are certainly tradeoffs.
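
For illustration only, here’s a minimal sketch of what “spread across multiple AZs” can look like in practice, using today’s boto3 Python SDK (which postdates this post); the AMI, instance type, and zone names are placeholder assumptions, not our actual setup:

```python
# Hypothetical sketch: round-robin instance launches across several AZs so a
# single zone failure can't take out an entire service tier.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Zone names are per-account aliases; your "us-east-1b" isn't my "us-east-1b".
ZONES = ["us-east-1a", "us-east-1c", "us-east-1d"]

def launch_spread(ami_id, instance_type, count):
    """Launch `count` instances, rotating through the configured zones."""
    instance_ids = []
    for i in range(count):
        resp = ec2.run_instances(
            ImageId=ami_id,                # placeholder AMI
            InstanceType=instance_type,    # placeholder instance type
            MinCount=1,
            MaxCount=1,
            Placement={"AvailabilityZone": ZONES[i % len(ZONES)]},
        )
        instance_ids.append(resp["Instances"][0]["InstanceId"])
    return instance_ids

# launch_spread("ami-12345678", "m3.large", 6)  # two instances per zone
```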

Second, we designed for failure from day one. Any of our instances, or any group of instances in an AZ, can be “shot in the head” and our system will recover (with some caveats – but they’re known, understood, and tested). I wish we could say this about some of our services in our own datacenter, but we’ve learned from our earlier mistakes and made sure that every piece we’ve deployed to AWS is designed to fail and recover.

Third, we don’t use Elastic Block Storage (EBS), which is the main component that failed last week. We’ve never felt comfortable with the unpredictable performance and sketchy durability that EBS provides, so we’ve never taken the plunge. Everyone (well, except for a few notable exceptions) knows that you need to use some level of RAID across EBS volumes if you want a reasonable level of durability (just like you would with any other storage device like a hard disk), but even so, EBS just hasn’t seemed like a good fit for us. Which also rules out their Relational Database Service (RDS) for us – since I believe RDS is, under the hood, EC2 instances running MySQL on EBS. I’ll be the first to admit that EBS’ lack of predictable performance has been our primary reason for staying away, rather than durability, but durability & availability have been strong secondary considerations. It’s hard to advocate a “systems are disposable” strategy when those systems have such a vital dependency on another service. Clearly, at least to us, it’s not a perfect product for our use case.
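
We don’t use EBS ourselves, but for readers who do, here’s a rough sketch of the “RAID across EBS volumes” idea, again using today’s boto3 SDK purely for illustration; the device names, sizes, and RAID level are assumptions, and the array itself is assembled on the instance with a tool like mdadm:

```python
# Hypothetical sketch: create several EBS volumes in the instance's AZ and
# attach them, so the host can assemble them into a software RAID set.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def attach_raid_volumes(instance_id, az, size_gb=100):
    """Create four volumes and attach them to the given instance."""
    devices = ["/dev/sdf", "/dev/sdg", "/dev/sdh", "/dev/sdi"]
    for dev in devices:
        vol = ec2.create_volume(AvailabilityZone=az, Size=size_gb)
        ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])
        ec2.attach_volume(VolumeId=vol["VolumeId"],
                          InstanceId=instance_id,
                          Device=dev)
    # Then, on the instance itself, something like:
    #   mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    #       /dev/sdf /dev/sdg /dev/sdh /dev/sdi
    # Note: this only guards against losing a single volume; it does nothing
    # for an AZ-wide EBS outage like last week's.
```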

Which brings us to the fourth point: we aren’t 100% cloud yet. We’re working as quickly as possible to get there, but the lack of a performant, predictable cloud database at our scale has kept us from making the full jump. As a result, the exact types of data that would have potentially been disabled by the EBS meltdown don’t actually live at AWS at all – it all still lives in our own datacenters, where we can provide predictable performance. This has its own downsides – we had two major outages of our own this week (we lost a core router and its redundancy earlier, and a core master database server later). I wish I didn’t have to deal with router or database hardware failures anymore, which is why we’re still marching towards the cloud.

Water On Fire © 2010 Colleen M. Griffith (www.colleenmgriffith.com) – lava from the Kilauea volcano flowing into the ocean on the Big Island of Hawaii, August 23, 2010.

WHAT HAPPENED

So what did we see when AWS blew up? Honestly, not much. One of our Elastic Load Balancers (ELBs) on a non-critical service lost its mind and stopped behaving properly, especially with regard to communication with the affected AZs. We updated our own status board, and then I set about working around the problem. We quickly discovered we could just launch another identical ELB, point it at the non-affected zones, and update our DNS. Five minutes after we discovered this, DNS had propagated and we were back in business. It’s interesting to note that the ELB itself was affected here – not the instances behind it. I don’t know much about how ELBs operate, but this leads me to believe that ELBs are constructed, like RDS, out of EC2 instances with EBS volumes. That seems like the most logical reason why an ELB would be affected by an EBS outage – but other things like network saturation, network component failures, split-brain, etc. could easily cause it as well.
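
To make the workaround concrete, here’s roughly what that recovery looks like, sketched with today’s boto3 SDK and a classic ELB; the load balancer name, instance IDs, hosted zone ID, record name, and zone list are illustrative assumptions, not our actual configuration:

```python
# Hypothetical sketch: stand up a replacement classic ELB in the healthy AZs,
# re-register the (unaffected) backend instances, then repoint DNS at it.
import boto3

elb = boto3.client("elb", region_name="us-east-1")
route53 = boto3.client("route53")

HEALTHY_ZONES = ["us-east-1a", "us-east-1c"]              # zones that weren't affected
BACKENDS = ["i-0123456789abcdef0", "i-0fedcba987654321"]  # placeholder instance IDs

new_elb = elb.create_load_balancer(
    LoadBalancerName="service-elb-replacement",
    Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                "InstanceProtocol": "HTTP", "InstancePort": 80}],
    AvailabilityZones=HEALTHY_ZONES,
)

elb.register_instances_with_load_balancer(
    LoadBalancerName="service-elb-replacement",
    Instances=[{"InstanceId": i} for i in BACKENDS],
)

# Repoint the service hostname at the new balancer with a short TTL so the
# cutover propagates quickly.
route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE123",                           # placeholder hosted zone
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "service.example.com.",
            "Type": "CNAME",
            "TTL": 60,
            "ResourceRecords": [{"Value": new_elb["DNSName"]}],
        },
    }]},
)
```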

Probably the worst part about this whole thing is that the outage in question spread to more than one AZ. In theory, that’s not supposed to happen – I believe each AZ is totally isolated (physically in another building at the very least, if not on the other side of town), so there should be very few shared components. In practice, I’ve often wondered how AWS does capacity planning for total AZ failures. You could easily imagine people’s automated (and even non-automated) systems simply rapidly provisioning new capacity in another AZ if there’s a catastrophic event (like Terminators attacking your facility, say). And you could easily imagine that surge in capacity taking enough of a toll on one or more AZs to incapacitate them, even temporarily, which could cause a cascade effect. We’ll have to wait for the detailed post-mortem to see if something similar happened here, but I wouldn’t be surprised if a surge in EBS requests to a 2nd AZ had at least a deteriorating effect. Getting that capacity planning done right is just another crazy difficult problem that I’m glad I don’t have to deal with for all of our AWS-powered services.

http://sreekanth.smugmug.com/Other/DailyPhotos/8242851_xEL2F#531020046_ctKmd

ADVICE

This stuff sounds super simple, but it’s really pretty important. If I were starting anew today, I’d absolutely build 100% cloud, and here’s the approach I’d take:

  • Spread across as many AZs as you can. Use all four. Don’t be like this guy and put all of the monitoring for your poor cardiac arrest patients in one AZ (!!).
  • If your stuff is truly mission critical (banking, government, health, serious money maker, etc), spread across as many Regions as you can. This is difficult, time consuming, and expensive – so it doesn’t make sense for most of us. But for some of us, it’s a requirement. This might not even be live – it could be just for Disaster Recovery (DR).
  • Beyond mission critical? Spread across many providers. This is getting more and more difficult as AWS continues to put distance between itself and its competitors, growing its platform and building services and interfaces that aren’t trivial to replicate. But if your stuff is that critical, you probably have the dough. Check out Eucalyptus and Rackspace Cloud for starters.
  • I should note that since spreading across multiple Regions and providers adds crazy amounts of extra complexity, and complex systems tend to be less stable, you could be shooting yourself in the foot unless you really know what you’re doing. Often redundancy has a serious cost – keep your eyes wide open.
  • Build for failure. Each component (EC2 instance, etc) should be able to die without affecting the whole system as much as possible. Your product or design may make that hard or impossible to do 100% – but I promise large portions of your system can be designed that way. Ideally, each portion of your system in a single AZ should be killable without long-term (data loss, prolonged outage, etc) side effects. One thing I mentally do sometimes is pretend that all my EC2 instances have to be Spot instances – someone else has their finger on the kill switch, not me. That’ll get you to build right. 🙂
  • Understand your components and how they fail. Use any component, such as EBS, only if you fully understand it. For mission-critical data using EBS, that means RAID1/5/6/10/etc locally, and some sort of replication or mirroring across AZs, with some sort of mechanism to get eventually consistent and/or re-instantiate after failure events. There’s a lot of work being done in modern scale-out databases, like Cassandra, for just this purpose. This is an area we’re still researching and experimenting in, but SimpleGeo didn’t seem affected and they use Cassandra on EC2 (and on EBS, as far as I know), so I’d say that’s one big vote.
  • Try to componentize your system. Why take the entire thing offline if only a small portion is affected? During the EBS meltdown, a tiny portion of our site (custom on-the-fly rendered photo sizes) was affected. We didn’t have to take the whole site offline, just that one component for a short period to repair it. This is a big area of investment at SmugMug right now, and we now have a number of individual systems that are independent enough from each other to sustain partial outages but keep service online. (Incidentally, it’s AWS that makes this much easier to implement)
  • Test your components. I regularly kill off stuff on EC2 just to see what’ll happen. I found and fixed a rare bug related to this over the weekend, actually, that’d been live and in production for quite some time. Verify your slick new eventually consistent datastore is actually eventually consistent. Ensure your amazing replicator will actually replicate correctly or allow you to rebuild in a timely fashion. Start by doing these tests during maintenance windows so you know how it works. Then, once your system seems stable enough, start surprising your Ops and Engineering teams by killing stuff in the middle of the day without warning them (there’s a rough sketch of that kind of random-kill test just after this list). They’ll love you.
  • Relax. Your stuff is gonna break, and you’re gonna have outages. If you did all of the above, your outages will be shorter, less damaging, and less frequent – but they’ll still happen. Gmail has outages, Facebook has outages, your bank’s website has outages. They all have a lot more time, money, and experience than you do and they’re offline or degraded fairly frequently, considering. Your customers will understand that things happen, especially if you can honestly tell them these are things you understand and actively spend time testing and implementing. Accidents happen, whether they’re in your car, your datacenter, or your cloud.
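
For the “test your components” point above, here’s a minimal sketch of the kind of random-kill exercise I mean, in the spirit of Netflix’s Chaos Monkey; the boto3 SDK and the tag filter are illustrative assumptions, not our actual tooling:

```python
# Hypothetical sketch: pick one running instance from a tagged group at random
# and terminate it, then watch whether the system heals without any help.
import random
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def kill_one_random_instance(tag_key="role", tag_value="render-worker"):
    """Terminate a random running instance carrying the given tag."""
    resp = ec2.describe_instances(
        Filters=[
            {"Name": "tag:" + tag_key, "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instances = [inst["InstanceId"]
                 for reservation in resp["Reservations"]
                 for inst in reservation["Instances"]]
    if not instances:
        return None
    victim = random.choice(instances)
    ec2.terminate_instances(InstanceIds=[victim])
    return victim  # now verify the service recovers without this instance

if __name__ == "__main__":
    print("Terminated:", kill_one_random_instance())
```

Start in a maintenance window, and only graduate to business hours once recovery is boring.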

Best part? Most of that stuff isn’t difficult or expensive, in large part thanks to the on-demand pricing of cloud computing.

WHAT ABOUT AWS?

Amazon has some explaining to do about how this outage affected multiple AZs, no question. Even so, high volume sites like Netflix and SmugMug remained online, so there are clearly cloud strategies that worked. Many of the affected companies are probably taking good hard looks at their cloud architecture, as well they should. I know we are, even though we were minimally affected.

Still, SmugMug wouldn’t be where we are today without AWS. We had a monster outage (~8.5 hours of total downtime) with AWS a few years ago, where S3 went totally dark, but that’s been the only significant setback. Our datacenter related outages have all been far worse, for a wide range of reasons, as many of our loyal customers can attest. 😦 That’s one of the reasons we’re working so hard to get our remaining services out of our control and into Amazon’s – they’re still better at this than almost anyone else on earth.

Will we suffer outages in the future because of Amazon? Yes. I can guarantee it. Will we have fewer outages? Will our outages be less catastrophic? That’s my bet.

http://jossphoto.smugmug.com/Landscapes/Digital-Art-Outdoors/2636501_QmaFJ#140520059_wN6kq

THE CLOUD IS DEAD!

There’s a lot of noise on the net about how cloud computing is dead, stupid, flawed, makes no sense, is coming crashing down, etc. Anyone selling that stuff is simply trying to get page views and doesn’t know what on earth they’re talking about. Cloud computing is just a tool, like any other. Some companies, like Netflix and SimpleGeo, likely understand the tool better. It’s a new tool, so cut the companies that are still learning some slack.

Then send them to my blog. 🙂

Oh, and while you’re here, would you mind doing me a huge favor? If you use StackOverflow, ServerFault, or any other StackExchange sites – I could really use your help. Thanks!

And, of course, we’re always hiring. Come see what it’s like to love your job (especially if you’re into cloud computing).

UPDATE: Joe Stump is out with the best blog post about the outage yet, The Cloud is not a Silver Bullet, imho.

My love affair with StackExchange (and a plea for help)

December 15, 2010 10 comments

tl;dr: Help me, Obi-Wan Kenobi. You’re my only hope. Trying to move the Q&A for our community to StackExchange, which I love, and could use your help. Please click here, hit ‘Commit’, then click the link in your email if you can help us out.

Daily photo, November 5, 2010 (taken October 26, 2010): the balcony at Harpo’s in Columbia, MO – famous for post-football-game riots. These B&W shots were inspired by Richard Kane, a great photographer.

I’ve been in love with StackOverflow since the day Jeff and Joel announced it. I taught myself how to write SmugMug using Google, not books, so a site like StackOverflow was a dream come true. However, I’d only been a lurker until recently – I figured I didn’t have the kind of time needed to really answer questions.

When ServerFault, and recently, the Photography & Photographic Editing StackExchange site came along, I started lurking more. Now that I run a photography company, I don’t get to take as many photos (go figure), but our employees and customers, obviously, take tons. And just like StackOverflow, the Photo site is full of great information shared by a wonderful, and growing, community. One of the things I really love about these sites is the overflow nature – many photographers are into geekier pastimes and vice versa, so there’s a natural compatibility with Photos and StackOverflow, for example.

Very recently, a few people have asked both “Why doesn’t SmugMug have a StackExchange site? Do you want to control all the data or something?” and “Why does your support forum suck so badly for finding answers to fairly simple questions?”. Our support forum does suck for this (good answers get buried pages deep, searching is tough, you don’t want to read all the discussions all the time, the same question gets asked many times, etc…). Which, of course, got me thinking – surely we can just pay StackExchange to solve this problem (I do not want to control all the data – I just want an awesome experience).

StackExchange would be amazing for our community because:

  • The best answers to a question are always on top. No wading through pages of replies.
  • Searching is easy, both on StackExchange and via engines like Google.
  • The same questions won’t get asked over and over – they’ve already been answered and are easy to find.
  • The system encourages people to ask great questions and provide authoritative answers.
  • You can tell at a glance if someone answering your question knows what they’re talking about.
  • They’ve done this for large topics like StackOverflow already, so they understand the ins & outs of the process and software to support communities like this.
  • And more… See for yourself at StackOverflow

Turns out, they won’t take a check. We have to go through a formal community vetting process, to make sure our criteria match theirs. After that, it’s free (yay!), but until then, we can’t use it (boo!). The process seems like a sound way to ensure that a StackExchange site won’t just linger and drift into obscurity, and that it starts off with a nice subset of users as it ramps up. After glancing through the FAQ, I figured we’d be a slam-dunk.

http://latoga.smugmug.com/Events/Public/2007-SF-Pillow-Fight/2483476_hUe4E#130341691_QqKNe

We have millions of paying customers, tens of thousands of whom are active posters on our forums at Digital Grin, and they’ve posted tens of thousands of threads with hundreds of thousands of replies around just the sorts of things we’d ask & answer on a StackExchange site. Best of all, we’d instantly have all the world’s experts (say, the top 100-200 most knowledgeable SmugMug people in the world) to jumpstart things. Sounds perfect, right?

Wrong.

Our problem is that during the ‘Commit’ phase, what matters more than warm bodies is your rank on other StackExchange sites, like StackOverflow, ServerFault, etc. And SmugMug’s community, while full of warm, eager bodies, isn’t brimming with StackExchange users. To make matters worse, I can’t reach out to my customers and ask them to ‘Commit’ because there’s nothing useful there to see, and explaining the process is difficult. When it’s in ‘Beta’, this probably gets much easier – since the site becomes fully functional at that point, we can begin directing *all* of our customers to it, and drive usage and adoption pretty rapidly.

When it comes right down to it, we’re really trying to expose tens of thousands (eventually hundreds of thousands, and perhaps millions) of new people to StackExchange. I’m very confident that many of these photographers would love to be exposed at the same time to the Photography StackExchange site, and that the thousands of developers using our API would love to be exposed to StackOverflow.

This seems to be a win-win for everyone involved: SmugMug gets massively better community-driven Q&A, SmugMug’s customers get the answers to the questions they need answered, and StackExchange gets valuable users, traffic, and data. But we’re stuck with a chicken-or-the-egg problem – we can’t jumpstart our community of fresh new StackExchange users because we don’t have enough StackExchange users.

Bummer.

So our ‘Commit’ process has stalled. And I’d love to have your help. If you’d like to see a repository of authoritative answers for SmugMug questions, from Pro-related sales & money-making to Power-user customization to API developer questions, please, give us a hand.

http://latoga.smugmug.com/Events/Public/2007-SF-Pillow-Fight/2483476_hUe4E#130341982_3xnat

Click here, hit ‘Commit’, fill in your details (your SmugMug URL works as an OpenID!), and then click the link in the email they’ll send you.

Help me, Obi-Wan Kenobi. You’re my only hope.

P.S. – I’m a full-fledged addict on StackOverflow and ServerFault, now, not just a lurker. Hardest thing? Answering a question quickly enough that someone else hasn’t already answered it. Those communities are on fire!

Why ‘Be Passionate’ is Awesome Advice

November 10, 2010 9 comments

Inc. has an article entitled Why ‘Be Passionate’ is Awful Advice where they baldly state that companies built on passion are fairy tales.

They’re wrong.

SmugMug is living proof. Here’s what it was like when we started, in response to their list of questions:

Is your idea really a business or just a hobby from which you’d enjoy creating a business?

SmugMug was an accident. The real business was a social network around video games. We started SmugMug as a side project (aka hobby) since we couldn’t find a good place to host our own personal photos online.

Can you actually realize your vision with your available time, capital, and resources?

We honestly had no idea, but it didn’t seem likely. The video game thing seemed like the real money maker, but it was going to take a lot more effort.

Is there a real, palpable, and evident demand for your offering among consumers? How big is the market?

No way. Every other photo sharing site was free. The bubble had burst and the Internet was a wasteland (this was 2002). The idea of charging for every single account seemed ludicrous to everyone but the two of us.

Does it have a real business model that will allow you to generate income immediately or a “maybe” model that might take years to (maybe) make a dime?

Real model? Sure, we were going to ask people to get their credit cards out and pay us real money. Was it going to actually generate income? We had no idea – asking people to get their credit cards out for a tiny, unknown, premium-only place to store your priceless memories wasn’t exactly a recipe that had investors foaming at the mouth.

Can you fully defend to your harshest critic the reasons why your business is capable of generating a dollar? How about $1,000? $100,000? More?

Nope. Our closest friends, including VCs on Sand Hill Road and successful Internet entrepreneurs, all told us we were insane and we’d never make money. After we got a single signup our first week, and only 5 the entire first month, we started to believe them.

Approximately how long do you believe it will take to generate income? Can you survive that long? How about two or three times longer than what you anticipate (which is more realistic, if not generous)?

We hoped we’d generate income immediately. We did – about $30. We bought more ramen and corn flakes. We had no idea when meaningful income would arrive – ‘never’ seemed the most likely timeline.

Why have other similar businesses failed and how is your iteration of an idea different?

We had no idea. We didn’t bother to do any competitive research deeper than “Is there a good place online to host my photos? No? Guess we’ll build one.”

Is your idea a money pit or a cash cow? Will it need constant reinvestment or can you scale organically?

Neither? We didn’t have any money (our idea was so crazy that no-one would invest in us), so we knew it couldn’t be a money pit. But cash cow seemed unlikely, too.

Can you survive a total failure or are you “all-in” if you want to get started?

We could survive a total failure for no reason other than we didn’t put anything into the business other than blood, sweat, and tears. Zero dollars of investment, either by the founders or outsiders, meant we could easily walk away. Painful, but possible. (We bummed free rack space from a friend, used three ancient free servers from a failed dot com, and threw some code on it)

Today, we’re profitable, growing fast, and working with the greatest people on earth. We host billions of photos and videos, and we have millions of passionate paying customers. Our offices are possibly the most fun in Silicon Valley, complete with gourmet food, giant gigapixel prints, dogs, go karts, dueling quadricopters, more 30″ displays than you’ve ever seen, and more.

Best of all? We work on the things we love because we own our own destiny. No outside investors meant we got to keep being passionate, day in and day out.

My advice to entrepreneurs? I’m absolutely positive that if you take your favorite hobby, mix in the Internet and a ton of hard work, you can build a great business. Whether you will or not is entirely up to you.

SmugMug is always hiring. Come do what you love, every day.

A look inside SmugMug

September 21, 2010 8 comments

Anton Lorimer, a SmugMug customer and unbelievable photographer and videographer, recently filmed an excellent look inside SmugMug for us:

Make sure to go Fullscreen and turn HD on, or click through to see A look inside SmugMug bigger.

There’s quite the discussion going on over at Facebook, too.

It’s awesome to take a step back and look at what all of our years of hard work have built. The future is bright, and I’m excited for our customers and employees!

What the AppleTV should have been

September 1, 2010 48 comments

tl;dr: The new AppleTV is a huge disappointment. Welcome to AppleTV 2007.

SmugMug is full of Apple fanboys. (And our customer list suggests Apple is full of SmugMug fanboys) We watch live blogs or streams of every product announcement as a company, debating and discussing as it unfolds. Everyone was especially hyped up about this one because of the iTV rumors. When Steve put up this slide (courtesy of gdgt’s excellent live blog), there was actual cheering at SmugMug HQ:

What people want from AppleTV

Steve’s absolutely right. We really want all of those things. Apple described the problem perfectly. Woo! Credit cards were literally out and people were ready to buy. But after the product was demo’d, the cheers had turned to jeers. There was an elephant in the room that squashed almost all of these lofty goals:

There were no Apps.

APPS MATTER

Why does the lack of Apps matter? Because we’re left with only ABC & Fox for TV shows. Where’s everyone else? I thought we wanted ‘professional content’ but we get two networks? Customers are dying for some disruption to the cable business, and instead we get a tiny fraction of cable’s content?

Then we’re left with Flickr for photos. Flickr, really? When Facebook has 5-6X the photo sharing usage of all other photo sharing sites combined? And heaven forbid you want to watch your HD videos or photos from SmugMug – we’re only the 4th largest photo sharing site in the world, clearly not big enough if Facebook isn’t.

WHAT APPLETV SHOULD HAVE BEEN

If only there were a way to seriously monetize the platform *and* open it up to all services at the same time. Oh, wait, that’s how Apple completely disrupted the mobile business. It’s called the App Store. Imagine that the AppleTV ran iOS and had its own App Store. Let’s see what would happen:

  • Every network could distribute their own content in whichever way they wished. HBO could limit it to their subscribers, and ABC could stream to everyone. Some would charge, some would show ads, and everyone would get all the content they wanted. Hulu, Netflix, and everyone else living in perfect harmony. Let the best content & pricepoint win.
  • We’d get sports. Every geek blogger misses this, and it’s one of the biggest strangleholds that cable and satellite providers have over their customers. You can already watch live, streaming golf on your iPhone in amazing quality. Now imagine NFL Sunday Ticket on your AppleTV.
  • You could watch your Facebook slideshows and SmugMug videos alongside your Flickr stream. Imagine that!
  • The AppleTV might become the best selling video game console, just like iPhone and iPod have done for mobile gaming. Plants vs Zombies and Angry Birds on my TV with a click? Yes please.
  • Apple makes crazy amounts of money. Way more than they do now with their 4 year old hobby.

Apple has a go-to-market strategy. Something like 250,000 strategies, actually. They’re called Apps.

WORLD’S BEST TV USER INTERFACE

The new AppleTV runs on the same chip that’s in the iPhone, iPad, and iPod. This should be a no-brainer. What’s the hold up? What’s that you say? The UI? Come on. It’s easy. And it could be the best UI to control a TV ever.

Just require the use of an iPod, iPhone, or iPad to control it. Put the whole UI on the iOS device in your hand, with full multi-touch. Pinching, rotating, zooming, panning – the whole nine yards. No more remotes, no more infrared, no more mess or fuss. I’m not talking about looking at the TV while your fingers are using an iPod. I’m talking about a fully realized UI on the iPod itself – you’re looking and interacting with it on the iPod.

There are 120M devices capable of this awesome UI out there already. So the $99 price point is still doable. Don’t have an iPod/iPad/iPhone? The bundle is just $299 for both.

That’s what the AppleTV should have been. That would have had lines around the block at launch. This new one?

It’s like an AppleTV from 2007.

Great idea! Google *should* open their index!

July 15, 2010 26 comments
Raised bridge on the Chicago River by Art Hill


tl;dr: Serving dozens (hundreds?) of crawlers is expensive. We could use an open index. Google’s?

Just read Tom Foremski’s Google Exec Says It’s A Good Idea: Open The Index And Speed Up The Internet article. And I have to say, it’s a great idea!

I don’t have hard numbers handy, but I would estimate close to 50% of our web server CPU resources (and related data access layers) go to serving crawler robots. Stop and think about that for a minute. SmugMug is a Top 300 website with tens of millions of visitors, more than half a billion page views, and billions of HTTP / AJAX requests (we’re very dynamic) each month. As measured by both Google and Alexa, we’re extremely fast (faster than 84% of sites) despite being very media heavy. We invest heavily in performance.

And maybe 50% of that is wasted on crawler robots. We have billions of ‘unique’ URLs since we have galleries, timelines, keywords, feeds, etc. Tons of ways to slice and dice our data. Every second of every day, we’re being crawled by Google, Yahoo, Microsoft, etc. And those are the well-behaved robots. The startups who think nothing of just hammering us with crazy requests all day long are even worse. And if you think about it, the robots are much harder to optimize for – they’re crawling the long tail, which totally annihilates your caching layers. Humans are much easier to predict and optimize for.

Worst part about the whole thing, though? We’re serving the exact same data to Google. And to Yahoo. And to Microsoft. And to Billy Bob’s Startup. You get the idea. For every new crawler, our costs go up.

We spend significant effort attempting to serve the robots quickly and well, but the duplicated effort is getting pretty insane. I wouldn’t be surprised if that was part of the reason Facebook revised their robots.txt policy, and I wouldn’t be surprised to see us do something similar in the near future, which would allow us to devote our resources to the crawlers that really matter.
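
To make “the crawlers that really matter” concrete, here’s a tiny sketch of the kind of allow-list policy I’m describing, in the spirit of Facebook’s approach: serve a robots.txt that welcomes a short list of named crawlers and disallows everyone else by default. It’s written in Python for illustration; the crawler names and crawl delay are assumptions, not an actual SmugMug policy:

```python
# Hypothetical sketch: generate a robots.txt that admits a handful of named
# crawlers and tells every other robot to stay out.
ALLOWED_CRAWLERS = ["Googlebot", "Bingbot", "Slurp"]   # illustrative allow-list

def build_robots_txt(allowed=ALLOWED_CRAWLERS, crawl_delay=5):
    """Return robots.txt text admitting only the crawlers we choose to serve."""
    lines = []
    for agent in allowed:
        lines += [
            "User-agent: %s" % agent,
            "Crawl-delay: %d" % crawl_delay,   # not every crawler honors this
            "Disallow:",                       # empty Disallow = crawl everything
            "",
        ]
    lines += ["User-agent: *", "Disallow: /"]  # everybody else: nothing to see
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(build_robots_txt())
```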

Anyway, if a vote were held to decide whether the world needs an open-to-all index, rather than all this duplicated crawling, I’d vote YES! And SmugMug would get even faster than it is today.

On a totally separate, but sorta related issue, Google shouldn’t have to do anything at all to their algorithms. Danny Sullivan has some absolutely brilliant satire on that subject.

I owe Apple an apology

July 9, 2010 3 comments

In my last post, I wrote that Apple wasn’t giving App developers access to the high quality 720p video recordings from your Library on iPhone 4.

I was wrong.

The documentation wasn’t clear and we made a bad assumption. And talking to other developers, they all concurred that they couldn’t get access to the high-quality Library videos, either. For years, Apple didn’t let developers get access to the full resolution photos from your Library, which they now permit, so we assumed that’s what was going on here, too. Thank goodness we were wrong.

Sorry Apple!

Go grab the latest SmugShot and enjoy blur-free videos. 🙂