Archive
Make video more consumable
As a follow-up to my last post about video online, which was in turn related to the post about Scoble and linking before that, here’s my take on how video online could improve:
I was struck by how much I enjoyed reading Scoble’s rundown of what was in his Intel video. He even included timestamps for some of the interesting bits, so you could skip right to them if you’d like. IMHO, this is a step in the right direction. What I’d *really* like to see is yet another step: chapters as separate entities with good, short summaries.
I’d love to see video with great content (I don’t think anyone’s debating whether Scoble gets great content – he clearly does, and lots of it) be available both as the long-form video (in this case, 40 minutes) as well as shorter (1-5 minute) chapters that tie together. The chapters would need to be completely separate video files so I don’t have to download the entire 40-minute segment to find the bit that’s important and relevant to me. That’s a biggy, so let me elaborate – even if I know the exact timecode for a segment in a longer video, I don’t want to download the whole thing and then jump through it. Instead, I want summaries of the chapters so I can quickly skim through the summaries and watch, say, 10 minutes out of 40 that’s highly focused.
Now, I know this won’t work for everyone. Scoble, for example, is passionate about not editing his videos because they’re conversations and I completely respect that as a viewer, a videographer myself, and an interviewee on his show. But even conversations often have chapters: the business chapter, the competition chapter, the upcoming features chapter, etc. I’m not advocating editing anything more than it’s already been edited – just making it more consumable.
There’s a reason YouTube became so wildly popular, and I don’t think it’s fair to brush the phenomenon off as “that was for fun stuff, but serious video needs to be long.” That’s a load of BS. I consume deep articles on technical subjects all the time, but I often skim for the good bits and jump out of the site to research related items before jumping back in. Both would be made possible by using text-based summaries with hyperlinked chapters as the basis for navigating a given video.
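To make that concrete, here’s a purely hypothetical sketch of what a chapter manifest for a long interview could look like – the field names, chapter titles, timestamps, and URLs below are all made up for illustration, not any real site’s format:

```typescript
// Hypothetical chapter manifest for a long interview video.
// Every field, title, timestamp, and URL here is illustrative only.
interface Chapter {
  title: string;     // short, skimmable headline for the chapter
  summary: string;   // a sentence or two so readers can decide whether to watch
  startTime: string; // where the chapter falls in the full-length video (hh:mm:ss)
  clipUrl: string;   // the chapter as its own small, separately downloadable file
}

const interviewChapters: Chapter[] = [
  {
    title: "The business chapter",
    summary: "How the team makes money and where the product fits in the roadmap.",
    startTime: "00:02:30",
    clipUrl: "https://example.com/interview/business.mp4",
  },
  {
    title: "The upcoming-features chapter",
    summary: "What's shipping next and roughly when.",
    startTime: "00:27:10",
    clipUrl: "https://example.com/interview/upcoming-features.mp4",
  },
];
```

A plain text page rendered from something like that is skimmable and hyperlinkable – which is exactly what the long-form video, by itself, isn’t.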
Thoughts?
Videos (and podcasts) suck sometimes
So I already commented on the whole Scoble thing, but I was commenting in general about how linking is usually better than not linking. I think everyone gets that.
But there’s another discussion going on that’s almost as interesting. Paul M. Watson got me thinking with his comment on my original post, which of course led to me reading his blog – specifically, his entry about how he doesn’t like video. It’s not too hard to find other opinions in a similar vein, such as Mathew Ingram’s post about video being Scoble’s Achilles’ heel. Just take a peek over at Techmeme and Tailrank and you’ll see there are quite a few discussions, including this one, bubbling up.
I don’t really agree with the specific details (some don’t think his coverage of Intel was that great, others just think video isn’t that great, etc), but I agree partially in spirit. I watch the occasional ScobleShow episode or listen to the occasional podcast – but not often, and not religiously. I read his blog (and dozens of others) almost daily, though. So what’s the difference?
Blogs are massively easier to consume. You can skim them, you get headlines, you have hyperlinks to follow an extra interesting story around the web, etc. The list goes on and on – but what it comes down to, for me, is time. If I’m going to watch a 20-minute video or listen to a 30-minute podcast, I basically can’t do anything else during that time. I also can’t "skip to the good bits" easily.
With text (blogs, articles, reviews, interviews, whatever), those things don’t apply. I can consume blogs in 1-minute chunks of time I have between tasks. I can explore the web to find out more. I can easily find the stuff I’m really interested in.
Don’t get me wrong – I love certain videos and podcasts a great deal. But the signal-to-noise ratio has to be sky-high to get me to invest my time.
Now I’m just gonna sit back and see if any other interesting discussions come up over this furor… I’m sure there will be.
Grab some popcorn, enjoy the show.
Scoble: Throwing himself under buses so I don't have to.
My friend, Robert Scoble, has two great rants up about blogging & linking: Big gadget sites don’t link to blogs followed by Pissing off the blogosphere.
His main point is a valid one – far too many places, whether they be old media (The New York Times) or big blogs (Engadget) or even small bloggers who are afraid people won’t read every word they’ve written, don’t link to external sources.
This is actually a huge deal. The real, true power of the web is just that – it’s a web. Everything can be interconnected, and learning about or researching a subject can be vastly easier online than anywhere else. Using hyperlinks is the very reason content belongs online. If you don’t hyperlink your content, why on earth do you have it online?
People often wonder why “old media” is struggling to find a voice and an audience (and a business plan) for online content. The fact that “old media” tend to be stingy linkers rarely gets mentioned – but I suspect it’s actually a fundamental reason people choose to get much of their content elsewhere.
For me, I’ve never really thought about it in these terms until Robert brought it up. I’ve always tried to link everything and anything, even words like “Google” and “Sun” (surely you know that Google = http://www.google.com and Sun = http://www.sun.com). Why? Because if I were reading my blog, I would want to just be able to click on pertinent details and dig deeper rather than having to open a new window and Google for the subject at hand. It just makes sense. If a word or phrase can be hyperlinked, and it’s more useful that way, it should be.
By the way, this situation also exposes what’s so great about bloggers, especially the big and popular ones – they write about what’s on their mind, often not even thinking about the impact of what they’re saying. Communication is fast, transparent, and emotional.
Preach on, brother Scoble.
Kudos to Jonathan Schwartz
This is a great blog post from Jonathan, CEO at Sun. If more companies were like this, the world would be a better place.
Just imagine if your cell phone provider, for example, actually cared about whether you were happy and whether they were delivering good value to you, their customer. Or how about your broadband provider?
I know, I know, we’d all die of heart attacks from the shock. 🙂
I have to say, I was pleasantly surprised by Sun’s response to my detailed review of the Sun Fire CoolThreads T1000 server and our interest in buying them. Sun assembled a great team of engineers and spent some time with us trying to figure out why our application wasn’t performing up to snuff on their hardware.
For those of you who are still wondering what’s up with that, I’m the one dropping the ball, not Sun – I’ve just gotten swamped. But I’m still interested, so as soon as I come up for air, I’ll try to get more hours put into it. Last time we all worked on it, Sun wasn’t able to get more performance out of it than I was – but they’re anxious to try again and I’m anxious to let them.
In related news, my current server provider, Rackable, seems to have fallen on hard times. We started using them something like 4 years ago, and loved them. Lately, though, their stock is in the toilet, they’ve taken ages to get hardware to us (and a few other major brands I shouldn’t divulge), and worst of all, very expensive brand-new servers from them are failing left and right. Anyone at Dell or HP want our business?
Hello Speed, Beauty & Brains – Goodbye Alexa
Michael Arrington at TechCrunch just broke the story of our latest release and it’s a great write-up. We’re really thrilled to have this puppy out the door and let everyone play with it.
This is a pretty fundamental shift for us, and while I don’t want to give us too much credit, I really think it’s the beginning of a sea change on the web. There have been plenty of apps that launched with 100% AJAX, like GMail, but I can’t think of any that have yet made the plunge to change an existing, entrenched product with lots of users. I’m sure there have been a few, so forgive me if I overlooked you, but certainly not many – most of the big apps are still HTML-driven. But I believe that’s going to change because the customer experience just gets so much better.
Everyone is going to do this. The only question is when? (Ok, two questions: And who will be left behind?)
We’ve been playing with 100% JavaScript/AJAX interfaces like this internally for quite some time, but there were some huge pitfalls that kept us from actually releasing it. When we finally solved the last hurdle we got really, really excited – this was gonna be great for customers. The minor downside is that I expect our Alexa rank to plummet – we’re no longer really doing page views, which I think is what they track. (We already get unfairly penalized because so many of our customers use their own custom domain names, but this should really do us in.) I couldn’t care less from a business point of view – this is good for customers, after all – but the geek in me thinks it’ll be fascinating to watch and see what happens.
The benefits of this release are obvious: the interface is faster, prettier, and smarter. But the pitfalls are less obvious. Here are a few of the biggies:
- Search engines. I know Google’s been testing a more JavaScript-aware version of Googlebot, but how aware it really is is anyone’s guess, and certainly no crawlers I’m aware of do even a marginal job of crawling AJAX pages. Our customers spend tons of time captioning, describing, and keywording the 120+ million photos at SmugMug, and all of that text still needs to be findable by search engines.
- Backwards compatibility. We built our URLs from day one to be “permalinks” so they wouldn’t change if you used them in your blog and forum posts. We had to make sure that things still worked going forward.
- AJAX Permalinks. Now we needed new permalinks that describe various pieces of data for browsing SmugMug, but we also needed to keep them short so people could copy & paste easily, so they wouldn’t wrap in emails, etc.
- Stats tracking. Specifically, external sources like StatCounter and Google Analytics, which only track page views, not JavaScript UI interactions. Our customers, especially the tens of thousands of hardworking Pros who build their photography businesses at SmugMug, expect to still get useful and meaningful statistics on who’s viewing what.
- Browser interfaces. People expect the Back & Forward buttons to work properly, along with History and Bookmarks. Doing so in all three major browsers was thought to be impossible, and we failed many times. We solved this one, and it was the last biggie. I believe it’s an internet first. Jimmy will be updating his blog about exactly how we do it so anyone else can follow suit (a generic sketch of the common approach appears just after this list). It’s good for the web as a whole for this stuff to move forward.
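For the curious, here’s a rough, generic sketch of the hash-fragment technique a lot of AJAX apps leaned on around this time to get Back/Forward, bookmarkable permalinks, and page-view stats all working together. To be clear, this is not our actual implementation (watch Jimmy’s blog for that) – the state fields, URL format, polling interval, and stats endpoint below are invented purely for illustration:

```typescript
// Generic hash-fragment navigation sketch. NOT SmugMug's real code: the
// ViewState shape, the "#g/.../p/..." format, and the /stats endpoint are
// made-up examples of the general technique.

type ViewState = { gallery: string; photo?: string };

// Encode the current view into a short, mailable permalink fragment.
function stateToHash(state: ViewState): string {
  return state.photo ? `#g/${state.gallery}/p/${state.photo}` : `#g/${state.gallery}`;
}

// Decode a fragment back into view state (e.g. on page load or a Back press).
function hashToState(hash: string): ViewState | null {
  const match = hash.match(/^#g\/([^/]+)(?:\/p\/([^/]+))?$/);
  return match ? { gallery: match[1], photo: match[2] } : null;
}

// Navigating just sets a new hash, so the browser records it in History
// and it can be bookmarked like any other URL.
function navigate(state: ViewState): void {
  window.location.hash = stateToHash(state);
}

function render(state: ViewState): void {
  // ...fetch data and redraw the AJAX UI for this gallery/photo...
}

// Fire a beacon so page-view-based analytics still see JS-only navigation.
// A 1x1 image request against your own stats endpoint is the generic trick.
function recordVirtualPageView(hash: string): void {
  new Image().src = `/stats/pageview?path=${encodeURIComponent(hash)}`;
}

// Before the hashchange event existed, apps polled location.hash on a timer
// to notice Back/Forward presses and redraw accordingly.
let lastHash = window.location.hash;
setInterval(() => {
  if (window.location.hash !== lastHash) {
    lastHash = window.location.hash;
    const state = hashToState(lastHash);
    if (state) {
      render(state);
      recordVirtualPageView(lastHash);
    }
  }
}, 100);
```

The nice side effect of funneling all view state through one short string is that the very same string doubles as a copy & paste-friendly permalink.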
It was an amazing team effort over here to get this thing done, including tons of our customers. GreenJimmy, our resident Web Superhero, especially drove this project long and hard. Hopefully we can talk more about what we did, technically, so others can avoid making the same mistakes we did.
I really have to also give props to the awesome team over at Yahoo! working on YUI. We couldn’t have pulled this off without their library (easily the best JavaScript library around). They did a profile on us just a week and a half ago, but that was before this release. Now we’re even more hardcore with all the YUI stuff. 🙂
Enjoy the release, and just wait to see what we’ve got coming next…. 🙂
Google's gone evil.
Some of you might remember how worried Google has been about the possibility of Vista and IE7 recommending Microsoft’s Live.com search engine over Google when they shipped.
I certainly know I remember meetings at Google where this very fear was front-and-center and how Googlers at those meetings were very passionate about the issue. They all agreed – it was horribly wrong of Microsoft to recommend an inferior search engine simply because they had upgraded their desktop software.
Have you tried searching for ‘blog’, ‘calendar’, or my personal favorite, ‘photo sharing’ at Google today?
That’s right. Since Google’s own products aren’t good enough to make the top of the rankings themselves, they’re starting to promote them directly, outside of AdWords, with bright logos and top placement (which no-one else can use).
Don’t get me wrong – it’s Google’s search engine, so they can do whatever they like. But let’s not forget that Google’s Code of Conduct specifically talks about trust. That’s one of the big reasons you and I use Google instead of, say, Yahoo – because we trust that the best results will more likely surface to the top at Google, unhindered by self-promotion of inferior in-house products.
I don’t think there’s much mystery that WordPress, TypePad, and LiveJournal are better blogging platforms than Blogger. It’s a shame Google’s resorting to self promotion and damaging their credibility rather than improving their products.
Trust is easily lost, Google. Tread lightly.
Blake Ross, he of Firefox fame, has a great writeup on the same subject. I’m sure there will be others, so keep an eye on Techmeme and Tailrank.
Flickr far superior to SmugMug?
It sure is – if you’re not our target customer.
Andy Atkinson has a great write-up of some of the ways Flickr is better than SmugMug. And he’s right about lots of it.
I love reviews like this. First of all, SmugMug doesn’t do any competitive research – we just don’t have time. Instead, we listen voraciously to our customers, and our to-do list is almost exclusively made up of things our customers want us to add, fix, or change. (Sometimes we have to read between the lines, because they don’t always know exactly how to ask for it, but we do our best.) Secondly, we’re awash in positive emails and reviews all the time. They’re nice, but they can give us a false sense of security and obscure the things we really need to work on. Andy’s review nicely shines a light on some areas where we’re weak and gives us a little insight into the competitive landscape at the same time. Thanks Andy!
Andy’s review is particularly refreshing because it’s the first one I can remember, either publicly or privately, where his point of view is that Flickr has more features than we do. Given that we release new features multiple times per month, and often once per week, we frequently (daily?) hear the opposite, and it’d be easy for us to assume we had every Flickr feature our customers wanted.
I left him a comment letting him know just how valuable his write-up is to us, and how much I enjoyed reading it, but he has moderation on. So I thought I’d talk about it here, on my blog, in case he doesn’t actually allow any comments.
As I told him, we’re not trying to be Flickr. We love Flickr, often refer customers that aren’t a great fit with SmugMug, and think it’s a great site that addresses a real mass-market need for photo sharing. But that’s not what SmugMug is – we’re not a mass-market brand, we’re not for everyone, and we think we have a very narrow bead on our target. Andy sure sounds like he’s much more of a Flickr customer than a SmugMug customer, so I’m surprised he lasted this long, but he makes some great points about things we should do better, even given our different focus:
- We don’t make it as easy to get your photos AND metadata back out of SmugMug. This one hit close to home because I’m very passionate about treating your photos as if they’re yours – not ours. We try very hard not to be the photo-sharing equivalent of the roach motel, where photos check in and never check out. We make it very easy to get your photos back out of SmugMug (they are your photos, after all, so you should be able to do whatever you want with them), but we don’t make it nearly as easy to get your metadata, like keywords and captions, back out too. Andy’s right on the money here, and I need to do a better job at this. You can use the API, of course, but we should make it easier than that.
- Our Geotagging interface is falling behind. We were first (we actually had two major releases of our mapping & geotagging stuff long before Flickr), but Flickr’s doing it better. We’re aware of it, and it’s on our radar – we just have to finish our next evolution. This sort of back-and-forth leapfrogging will always happen, I’m afraid. It’s the nature of a competitive business. One company does it best for a few months, then another takes the top spot. Back and forth.
- Our statistics could be better. He’s wrong about us not having per-photo statistics (we do), but he’s right that we don’t offer searching and sorting by other criteria, like comments. Doing better, richer statistics is something we’d like to do, and it’s good to see people like Andy calling us out on it.
- Photo books (and other similar items). He mentions QOOP specifically, but the real issue is that we don’t sell photo books (or calendars, greeting cards, etc). We want to, and we’re working hard on doing it (it’s an active project in the company right now, and has been for a while), and I wish we’d done it by now, but QOOP just isn’t the answer. Their quality level wasn’t even close to our standards, either in terms of the finished product or the shopping cart experience. This is one area where the difference between Flickr’s target customers and ours is a big deciding factor – we’d rather not offer a product for a while than offer something that’s not high-quality. Many of our customers build their businesses on SmugMug, and if we offer an embarrassing level of quality, it reflects badly on them. We take that burden very seriously.
He has plenty of other good, interesting points that we’ll have to think about, but many of them are not really SmugMug’s focus, so I can safely shelve them for a later date. All the points above, though, are solid areas we need to work on. They’re core to our business, they’d enhance our customers’ experience, and we’re clearly not executing on all of them as well or as fast as we should be.
Anyway, great review and a good illustration of the differences between our two sites. I love reading stuff like this, so be sure to let me know if you blog about anything similar. We do, of course, read all of our email every day and usually respond in minutes – so keep the feedback coming!
Amazon S3: Show me the money
UPDATE 4/30/07: This post was written in November 2006, so these numbers are a little out of date. It’s now been 12 months and we’ve saved almost exactly $1M. You can see the most recent numbers, as of April 2007, in my ETech slides.
I still have some more Web 2.0 Summit stuff to write up if I get a few minutes today, but let me talk about Amazon’s S3 for a minute. At the conference, I was chatting with Michael Arrington of TechCrunch fame (who perfectly handled a blogosphere mini-explosion last week, I thought) and we got to talking about S3. He was impressed with how we were using it, but joked that our $500K saved number sounded like “complete bullsh*t”. I laughed along with him and assured him it was true, but on the way home I got to thinking that it IS a really big number to throw out there without details.
So here are the cold hard facts:
- Our estimate, as you can see in BusinessWeek’s cover story, is that we’re saving $500K per year. We’ve been using S3 for almost 7 months so far (we launched it on or around April 14th), so for my $500K estimate to be in the right ballpark, we should be somewhere near $291K saved to date (well, we don’t grow linearly, so less than that … but let’s do easy math, shall we?).
- We had roughly 64,000,000 photos when we launched S3. We now have close to 110,000,000 photos. Yes, that’s ~72% growth in 7 months.
- To sustain our pre-S3 growth, we were buying roughly $40,000 per month in hard disks plus servers to attach them to. We’re not talking about EMC or other over-priced storage solutions. We’re talking about single-processor commodity Pentium 4 servers attached to really cheap Apple Xserve RAID arrays. Not quite off-the-shelf IDE disks, but once you factor in the reliability and manageability, the TCO comes out to be in a similar ballpark (we’ve done it both ways).
- If you’re doing the math at home, $40K may seem a little high until you realize how our architecture works: We use RAID-5, with hot spares, and we have two entirely separate storage clusters. That means we have to buy 1.4TB of raw disk to store an actual 500GB.
- To sustain our current, Nov 2006 growth rate, we’d need to buy more like ~$80K per month. Let’s assume over the 7 months, it ramped from $40K to $80K linearly (it was actually more of a curve, but this makes the math easier). $40K + $46K + $53K + $60K + $66K + $73K + $80K = $418K
- Our datacenter space, power, and cooling costs for those arrays are ~$1.36/month for every $100 we spend on storage hardware (~$544/month @ $40K, ramping to ~$1088/month @ $80K). $544 + $626 + $721 + $816 + $898 + $993 + $1088 = $5,686.
- It’s cost us some manpower to move everything up to S3. So while I expect to save money on manpower in the long run, currently it’s probably break even – I don’t have to install, manage and maintain new hardware, but I’ve had to copy more than 100TB up to Amazon. (We’re not done copying old data up yet, either)
- Total amount NOT spent over the last 7 months: $423,686
- Total amount spent on S3: $84,255.25
- Total savings: $339,430.75
- That works out to $48,490 / month, which is $581,881 per year. Remember, though, our rate of growth is high, so over the remaining 5 months, the monthly savings will be even greater.
- These are real, hard numbers after using S3 for 7 months, not projections – and they closely match (in fact, slightly beat) our original projections. If you want to replay the arithmetic, there’s a quick sketch just below.
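If you want to double-check my math, here’s a tiny script that just replays the arithmetic above – the spending ramp, the datacenter overhead, and the totals. Nothing new in it; every figure comes straight from the bullets:

```typescript
// Replays the back-of-the-envelope S3 savings math from the bullets above.

// Hardware spend ramps (roughly) linearly from $40K to $80K/month over 7 months.
const hardware = [40_000, 46_000, 53_000, 60_000, 66_000, 73_000, 80_000];
const hardwareTotal = hardware.reduce((sum, m) => sum + m, 0);     // $418,000

// Datacenter space, power, and cooling: ~$1.36/month per $100 spent on storage.
const datacenter = hardware.map((m) => (m / 100) * 1.36);
const datacenterTotal = datacenter.reduce((sum, m) => sum + m, 0); // ≈ $5,685 (≈ $5,686 with per-month rounding)

const notSpent = hardwareTotal + datacenterTotal;                  // ≈ $423,685
const s3Bill = 84_255.25;                                          // actual Amazon S3 spend over the 7 months
const savings = notSpent - s3Bill;                                 // ≈ $339,430

const perMonth = savings / 7;                                      // ≈ $48,490 per month
const perYear = perMonth * 12;                                     // ≈ $581,880 per year

console.log({ hardwareTotal, datacenterTotal, notSpent, savings, perMonth, perYear });
```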
So there you have it.
But wait! It gets even better! Because of the stupid way the tax law operates in this country, I would actually have had to pay taxes on the $423K had I spent it on drives – hardware gets depreciated over several years rather than expensed right away, so for tax purposes it’s treated as if that money were still profit. Dumb. That would have meant an additional ~$135K in taxes. Technically, I’d get that back over the next 5 years, so I didn’t want to include it as “savings”, but as you can imagine, the cash flow implications are huge. In a very real sense, the actual cash I’ve conserved so far is about $474,000.
But wait! It gets even better! Amazon has been so reliable over the last 7 months (considerably more reliable than our own internal storage, which I consider to be quite reliable), that just last week we made S3 an even more fundamental part of our storage architecture. I’ll save the details for a future post, but the bottom line is that we’re actually going to start selling up to 90% of our hard drives on eBay or something. So costs I had previously assumed were sunk are actually about to be recouped. We should get many hundreds of thousands of dollars back in cash.
I expect our savings from Amazon S3 to be well over $1M in 2007, maybe as high as $2M.
Perhaps most important, though, is the difficult-to-quantify time, effort, and mental energy we’re saving. We get to spend both that money and all of our extra time and effort on providing a better customer experience and delivering better customer service. Storage was a necessary evil that’s now been nearly removed as a concern.
Want more? I have some other posts on the subject.
And I’ll continue to post more hard details, including our technical architecture and some of our code. And yes, we’re starting to consume other Amazon services like EC2.
Web 2.0 Summit: Jeff Bezos
Jeff Bezos just gave a great presentation and had an interesting chat with Tim O’Reilly here at the Web 2.0 Summit. I’ve written about Amazon’s web services a few times, including the BusinessWeek cover story this week.
In case you don’t want to read the long-winded version, here’s a summary of what I think is really going on here:
- Amazon Web Services isn’t some strange deviation from Amazon’s core business. Instead, it’s an evolution of their business that makes a lot of sense. They’ve learned to scale datacenters well, and companies like ours don’t want to have to learn those same lessons, so we can build on Amazon. Amazon makes money, we save time (which is money) and get to focus on our application, and everyone wins.
- Google gets a lot of press for building a “WebOS” as they release web-based replacements for desktop applications. But they’re really focused on client-side desktop replacements, whereas Amazon is really focusing on backend, server-side replacements. It’s less glamorous to the average consumer, but far more glamorous to anyone who needs those services to build their company.
I’m not sure everyone grasps how truly huge this is. I suppose that’s good, since we do and it gives us an edge.
Web 2.0 Summit: Eric Schmidt
Some notes from the Eric Schmidt piece:
- Google Video was doing well, YouTube was doing better. Something fundamentally changed last year where video became a prominent web format, so buying YouTube locked up that growth.
- Has a pretty good idea on how to monetize “other kinds of traffic” (other than text ads). He’s referring here to copyrighted data in particular.
- Worries about competition, particularly being a big target. Feels the best way to defend against this is to make it user-friendly and user-centric, as opposed to the typical large corporation defense of keeping everything proprietary.
- As long as people feel like they can easily switch from Google, that keeps Google honest and keeps them focused on their customer. Does that sound like anyone else we know?
- Stood up to the government request for index data for those very reasons. What user would want their data in the hands of someone else?
- Is happy to stand up for what they feel is right, but as soon as a federal judge rules that they have to do something, they will. They realized they’re beholden to US law.
- “It’s a mistake to bet against the Internet. Don’t bet against the Internet.”
- “Fundamentally better to keep your money in a bank than in your pocket.” … compares that to software belonging in a datacenter.
- Google’s not trying to position their stuff, like Writely, Calendar, GMail as an Office Suite. Instead, their focus is to enable casual sharing and casual communication.
- “You could pay people to use their product.” (in answer to a comment that free is pretty compelling)
- If there’s ever a difference of opinion between a sales guy and an engineer, the engineer wins.
- All of the really good stuff comes out of the 10% of time employees spend on things other than their core projects (70%) and adjacent projects (20%)
- Google may appear chaotic, but it’s very strategic. Chaos is part of the creative process.
- “People don’t work for money. They work for impact.”
- Everything at Google is group-driven, no single decision makers. Been difficult dealing with partners because of this, but worked really well internally. Best decisions come from groups.
- “They always win.” (referring to Larry & Sergey and disagreements)
- “I’m the one with the experience who’s late. They’re the ones with the inexperience who’s early. That’s what makes it work so well.” They end up in the middle.
To be honest, I was surprised by how intelligent he came off (sorry Eric!). I’ve never met or interacted with him, but the blogosphere tends to pooh-pooh his impact on the company as just being a babysitter for Larry and Sergey. He knew what LAMP was (including the various meanings of the “P”), and made plenty of other comments that suggested he’s not just a figurehead at Google but is really involved with the vision and strategy. Refreshing and good to hear.