Search Results

Keyword: ‘geek’

Geek Dinner in Redmond, WA on April 9th?

April 4, 2008 13 comments
Seattle Skyline – photo by Shay Stephens

I’ll be up in Redmond (first time!) for business next week (sorry sweetie!), and have no plans Wednesday evening, the 9th. Anyone wanna get some food, play some games, or just hang out?

Post in the comments.

Categories: personal

Geek Dinner in Seattle on Jan 15th?

January 3, 2008 14 comments

I’m going to be in Seattle on the 15th and would love to get some great food with some great geeks in the area.

Anyone interested? If so, please comment or email and we’ll coordinate. Appreciate suggestions on places to eat that are tasty and good for geek conversations, too.

You can always see where I’m going at Dopplr, too, if that’s your thing.

Categories: personal

LunchGeeks Tomorrow (Aug 30th)!

August 29, 2007 4 comments

Wanna come geek out over the best Mexican food in Silicon Valley? LunchGeeks is on tomorrow. Sorry for the late notice – I didn’t realize August was almost over until the Wall Street Journal mentioned us today.

Here’s the Upcoming event. Add yourself at Upcoming, on the LunchGeeks blog, or right here so I know if you’re coming.

I promise I’ll try to give more notice for September. 🙂

Categories: personal

LunchGeeks this week

May 21, 2007 2 comments

Going to eat at La Fiesta and geek out again this week on May 24th. Come if that sounds like your thing!

Don’t forget about all the Lunch 2.0 events this summer, too. (It must be intern season!)

Categories: personal

Geek out while you pig out

April 4, 2007 1 comment

I just started something called LunchGeeks. Basically, a bunch of geeks get together for lunch once a month, and self-organize into small groups to eat and compare notes. I’m hoping it’ll be fun for everyone. If it sounds like your thing, by all means, come and tell your friends. 🙂

Categories: personal

My love affair with StackExchange (and a plea for help)

December 15, 2010 10 comments

tl;dr: Help me, Obi-Wan Kenobi. You’re my only hope. Trying to move the Q&A for our community to StackExchange, which I love, and could use your help. Please click here, hit ‘Commit’, then click the link in your email if you can help us out.

Photo: Full Balcony, Columbia MO – the balcony at Harpo’s, famous for post-football-game riots. B&W processing inspired by photographer Richard Kane. (Daily photo: November 5, 2010; taken October 26, 2010.)

I’ve been in love with StackOverflow since the day Jeff and Joel announced it. I taught myself how to write SmugMug using Google, not books, so a site like StackOverflow was a dream come true. However, I’d only been a lurker until recently – I figured I didn’t have the kind of time needed to really answer questions.

When ServerFault and, more recently, the Photography & Photographic Editing StackExchange site came along, I started lurking more. Now that I run a photography company, I don’t get to take as many photos (go figure), but our employees and customers, obviously, take tons. And just like StackOverflow, the Photo site is full of great information shared by a wonderful, and growing, community. One of the things I really love about these sites is the overflow nature – many photographers are into geekier pastimes and vice versa, so there’s a natural compatibility between Photos and StackOverflow, for example.

Very recently, a few people have asked both “Why doesn’t SmugMug have a StackExchange site? Do you want to control all the data or something?” and “Why does your support forum suck so badly for finding answers to fairly simple questions?”. Our support forum does suck for this (good answers get buried pages deep, searching is tough, you don’t want to read all the discussions all the time, the same question gets asked many times, etc…). Which, of course, got me thinking – surely we can just pay StackExchange to solve this problem (I do not want to control all the data – I just want an awesome experience).

StackExchange would be amazing for our community because:

  • The best answers to a question are always on top. No wading through pages of replies.
  • Searching is easy, both on StackExchange and via engines like Google.
  • The same questions won’t get asked over and over – they’ve already been answered and are easy to find.
  • The system encourages people to ask great questions and provide authoritative answers.
  • You can tell at a glance if someone answering your question knows what they’re talking about.
  • They’ve done this for large topics like StackOverflow already, so they understand the ins & outs of the process and software to support communities like this.
  • And more… See for yourself at StackOverflow

Turns out, they won’t take a check. We have to go through a formal community vetting process, to make sure our criteria match theirs. After that, it’s free (yay!), but until then, we can’t use it (boo!). The process seems like a sound way to ensure that a StackExchange site won’t just linger and drift into obscurity, and that it starts off with a nice subset of users as it ramps up. After glancing through the FAQ, it looked like we’d be a slam-dunk.

http://latoga.smugmug.com/Events/Public/2007-SF-Pillow-Fight/2483476_hUe4E#130341691_QqKNe

We have millions of paying customers, tens of thousands of whom are active posters on our forums at Digital Grin, and they’ve posted tens of thousands of threads with hundreds of thousands of replies around just the sorts of things we’d ask & answer on a StackExchange site. Best of all, we’d instantly have all the world’s experts (say, the top 100-200 most knowledgeable SmugMug people in the world) to jumpstart things. Sounds perfect, right?

Wrong.

Our problem is that during the ‘Commit’ phase, what matters more than warm bodies is your rank on other StackExchange sites, like StackOverflow, ServerFault, etc. And SmugMug’s community, while full of warm, eager bodies, isn’t brimming with StackExchange users. To make matters worse, I can’t reach out to my customers and ask them to ‘Commit’ because there’s nothing useful there to see, and explaining the process is difficult. When it’s in ‘Beta’, this probably gets much easier – since the site becomes fully functional at that point, we can begin directing *all* of our customers to it, and drive usage and adoption pretty rapidly.

When it comes right down to it, we’re really trying to expose tens of thousands (hundreds of thousands eventually, and perhaps millions) of new people to StackExchange. I’m very confident that many of these photographers would love to be exposed at the same time to the Photography StackExchange site, and that the thousands of developers of our API would love to be exposed to StackOverflow.

This seems to be a win-win for everyone involved: SmugMug gets massively better community-driven Q&A, SmugMug’s customers get the answers to the questions they need answered, and StackExchange gets valuable users, traffic, and data. But we’re stuck with a chicken-or-the-egg problem – we can’t jumpstart our community of fresh new StackExchange users because we don’t have enough StackExchange users.

Bummer.

So our ‘Commit’ process has stalled. And I’d love to have your help. If you’d like to see a repository of authoritative answers for SmugMug questions, from Pro-related sales & money-making to Power-user customization to API developer questions, please, give us a hand.

http://latoga.smugmug.com/Events/Public/2007-SF-Pillow-Fight/2483476_hUe4E#130341982_3xnat

Click here, hit ‘Commit’, fill in your details (your SmugMug URL works as an OpenID!), and then click the link in the email they’ll send you.

Help me, Obi-Wan Kenobi. You’re my only hope.

P.S. – I’m a full-fledged addict on StackOverflow and ServerFault, now, not just a lurker. Hardest thing? Answering a question quickly enough that someone else hasn’t already answered it. Those communities are on fire!

What the AppleTV should have been

September 1, 2010 48 comments

tl;dr: The new AppleTV is a huge disappointment. Welcome to AppleTV 2007.

SmugMug is full of Apple fanboys. (And our customer list suggests Apple is full of SmugMug fanboys) We watch live blogs or streams of every product announcement as a company, debating and discussing as it unfolds. Everyone was especially hyped up about this one because of the iTV rumors. When Steve put up this slide (courtesy of gdgt’s excellent live blog), there was actual cheering at SmugMug HQ:

What people want from AppleTV

Steve’s absolutely right. We really want all of those things. Apple described the problem perfectly. Woo! Credit cards were literally out and people were ready to buy. But after the product was demo’d, the cheers had turned to jeers. There was an elephant in the room that squashed almost all of these lofty goals:

There were no Apps.

APPS MATTER

Why does the lack of Apps matter? Because we’re left with only ABC & Fox for TV shows. Where’s everyone else? I thought we wanted ‘professional content’ but we get two networks? Customers are dying for some disruption to the cable business, and instead we get a tiny fraction of cable’s content?

Then we’re left with Flickr for photos. Flickr, really? When Facebook has 5-6X the photo sharing usage of all other photo sharing sites combined? And heaven forbid you want to watch your HD videos or photos from SmugMug – we’re only the 4th largest photo sharing site in the world, clearly not big enough if Facebook isn’t.

WHAT APPLETV SHOULD HAVE BEEN

If only there were a way to seriously monetize the platform *and* open it up to all services at the same time. Oh, wait, that’s how Apple completely disrupted the mobile business. It’s called the App Store. Imagine that the AppleTV ran iOS and had its own App Store. Let’s see what would happen:

  • Every network could distribute their own content in whichever way they wished. HBO could limit it to their subscribers, and ABC could stream to everyone. Some would charge, some would show ads, and everyone would get all the content they wanted. Hulu, Netflix, and everyone else living in perfect harmony. Let the best content & price point win.
  • We’d get sports. Every geek blogger misses this, and it’s one of the biggest strangleholds that cable and satellite providers have over their customers. You can already watch live, streaming golf on your iPhone in amazing quality. Now imagine NFL Sunday Ticket on your AppleTV.
  • You could watch your Facebook slideshows and SmugMug videos alongside your Flickr stream. Imagine that!
  • The AppleTV might become the best selling video game console, just like iPhone and iPod have done for mobile gaming. Plants vs Zombies and Angry Birds on my TV with a click? Yes please.
  • Apple makes crazy amounts of money. Way more than they do now with their 4 year old hobby.

Apple has a go-to-market strategy. Something like 250,000 strategies, actually. They’re called Apps.

WORLD’S BEST TV USER INTERFACE

The new AppleTV runs on the same chip that’s in the iPhone, iPad, and iPod. This should be a no-brainer. What’s the hold up? What’s that you say? The UI? Come on. It’s easy. And it could be the best UI to control a TV ever.

Just require the use of an iPod, iPhone, or iPad to control it. Put the whole UI on the iOS device in your hand, with full multi-touch. Pinching, rotating, zooming, panning – the whole nine yards. No more remotes, no more infrared, no more mess or fuss. I’m not talking about looking at the TV while your fingers are using an iPod. I’m talking about a fully realized UI on the iPod itself – you’re looking and interacting with it on the iPod.

There are 120M devices capable of this awesome UI out there already. So the $99 price point is still doable. Don’t have an iPod/iPad/iPhone? The bundle is just $299 for both.

That’s what the AppleTV should have been. That would have had lines around the block at launch. This new one?

It’s like an AppleTV from 2007.

Success with OpenSolaris + ZFS + MySQL in production!

October 10, 2008 82 comments
Pimp My Drive – photo by Richard and Barb

There’s remarkably little information online about using MySQL on ZFS, successfully or not, so I did what any enterprising geek would do: Built a box, threw some data on it, and tossed it into production to see if it would sink or swim. 🙂

I’m a Linux geek, have been since 1993 (Slackware!). All of SmugMug’s datacenters (and our EC2 images) are built on Linux. But the current state of filesystems on Linux is awful, and it’s been awful for at least 8 years. As a result, we’ve put our first OpenSolaris box into production at SmugMug and I’ve been pleasantly surprised with the performance (the userland portions of the OS, though, leave a lot to be desired). Why OpenSolaris?

ZFS.

ZFS is the most amazing filesystem I’ve ever come across. Integrated volume management. Copy-on-write. Transactional. End-to-end data integrity. On-the-fly corruption detection and repair. Robust checksums. No RAID-5 write hole. Snapshots. Clones (writable snapshots). Dynamic striping. Open source software. It’s not available on Linux. Ugh. Ok, that sucks. (GPL is a double-edged sword, and this is a perfect example). Since it’s open-source, it’s available on other OSes, like FreeBSD and Mac OS X, but Linux is a no go. *sigh* I have a feeling Sun is working towards GPL’ing ZFS, but these things take time and I’m sick of waiting.
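
If you want to see that corruption detection and repair in action, a scrub walks every block in a pool and verifies its checksum – a quick sketch (the pool name is whatever you created):

    # verify every block's checksum, repairing from redundant copies where possible
    zpool scrub MYPOOL
    # check scrub progress and any errors that were found
    zpool status -v MYPOOL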

The OpenSolaris project is working towards making Solaris resemble the Linux (GNU) userland plus the Solaris kernel. They’re not there yet, but the goal is commendable and the package management system has taken a few good steps in the right direction. It’s still frustrating, but massively less so. Despite all the rough edges, though, ZFS is just so compelling I basically have no choice. I need end-to-end data integrity. The rest of the stuff is just icing on an already delicious cake.

The obvious first place to use ZFS was for our database boxes, so that’s what I did. I didn’t have the time, knowledge of OpenSolaris, or inclination to do any synthetic benchmarking or attempt to create an apples-to-apples comparison with our current software setup, so I took the quickest route I could to have a MySQL box up and running. I had two immediate performance metrics I cared about:

  • Can a MySQL slave on OpenSolaris with ZFS keep up with the write load with no readers?
  • If yes, can the slave shoulder its fair share of the reads, too?

Simple and to the point. Here’s the system:

  • SunFire X2200 M2 w/64GB of RAM and 2 x dual-core 2.6GHz Opterons
  • Dell MD3000 w/15 x 15K SCSI disks and mirrored 512MB battery-backed write caches (these are really starting to piss us off, but that’s another post…)

The quickest path to getting the system up and running resulted in lots of variables in the equation changing:

  • Linux -> OpenSolaris (snv_95 currently)
  • MySQL 5.0 -> MySQL 5.1
  • LVM2 + ext3 -> ZFS
  • Hardware RAID -> Software RAID
  • No compression -> gzip9 volume compression

Whew! Lots of changes. Let me break them down one by one, skipping the obvious first one:

MySQL – MySQL 5.1 is nearing GA, and has a couple of very important bug fixes for us that we’ve been working around for an awfully long time now. When I downloaded the MySQL 5.0 Enterprise Solaris packages and they wouldn’t install properly, that made the decision to dabble with 5.1 even easier – the CoolStack 5.1 binaries from Sun installed just fine. 🙂

Going to MySQL 5.1 on a ~1TB DB is painful, though, I should warn you up front. It forced ‘REPAIR TABLE’ on lots of my tables, so this step took much longer than I expected. Also, we found that the query optimizer in some cases did a poor job of choosing which indexes to use for queries. A few “simple” SELECTs (no JOINs or anything) that would take a few milliseconds on our 5.0 boxes took seconds on our 5.1 boxes. A little bit of code solved the problem and resulted in better efficiency even for the 5.0 boxes, so it was a net win, but painful for a few hours while I tracked it down.
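
(For what it’s worth, one common fix for this class of optimizer misfire is an explicit index hint – shown below purely as an illustration with made-up table and index names; the actual change we made may well have been different.)

    # hypothetical example: force the index the 5.0 optimizer used to pick
    mysql -e 'SELECT id, caption FROM photos FORCE INDEX (idx_album_id) WHERE album_id = 12345;' mydb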

Finally, after running CoolStack for a few days, we switched (on advice from Sun) to the 5.1.28 Community Edition to fix some scalability issues. This made a huge difference so I highly recommend it. (On a side note, I wish MySQL provided Enterprise binaries for 5.1 for their paying customers to test with). The Google & Percona patches should make a monster difference, too.

Volume management and the filesystem – There’s some debate online as to whether ZFS is a “layering violation” or not. I could care less – it’s pure heaven to work with. This is how filesystems should have always been. The commands to create, manage, and extend pools are so simple and logical you basically don’t even need man pages (discovering disk names, on the other hand, isn’t easy. I finally used ‘format’ but even typing it gives me the shivers…).

    zpool create MYPOOL c0t0d0

You just created a ZFS pool. Want a mirror?

    zpool create MYPOOL mirror c0t0d0 c0t0d1

Want a striped mirror (RAID-1+0) w/spare?

    zpool create MYPOOL mirror c0t0d0 c0t0d1 mirror c0t0d2 c0t0d3 spare c0t0d4

Want to add another mirror to an already striped mirror (RAID-1+0) pool?

    zpool add MYPOOL mirror c0t0d5 c0t0d6

Get the idea? Super-easy. Massively easier than LVM2+ext3, where adding a mirror is at least 4 commands: pvcreate, vgextend, lvextend, resize2fs – usually with an fsck in there too.
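
For contrast, the LVM2+ext3 dance referred to above looks roughly like this (a sketch only – device, volume group, and logical volume names are hypothetical, and the exact steps depend on your layout):

    # hypothetical names throughout
    pvcreate /dev/sdc                     # tell LVM about the new disk
    vgextend datavg /dev/sdc              # add it to the volume group
    lvextend -L +500G /dev/datavg/mysql   # grow the logical volume
    resize2fs /dev/datavg/mysql           # grow ext3 to match (an fsck may be required first)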

Software RAID – This is something we’ve been itching for for quite some time. With modern system architectures and modern CPUs, there’s no real reason “storage” should be separate from “servers”. A storage device should be just a server with some open-source software and lots of disks. (The “open source” part is important. I’m sick of relying on closed-source RAID firmware). The amount of flexibility, performance, reliability and operational cost savings you can achieve with software RAID rather than hardware is enormous. With real datacenter-grade flash storage devices just around the corner, this becomes even more vital. ZFS makes all of this stuff Just Work, including properly adjusting the write caches on the disk, eliminating the RAID-5 write hole, etc. Our first box still has a battery-backed write-cache between the disks and the CPU for write performance, but all the disks are just exposed as JBOD and striped + mirrored using ZFS. It rocks.
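
As a purely illustrative sketch of that layout – device names are hypothetical, and the real pool may be carved up differently – striping and mirroring 15 JBOD disks with a hot spare looks something like this:

    # seven 2-way mirrors striped together, plus a hot spare (hypothetical device names)
    zpool create dbpool \
      mirror c1t0d0 c1t1d0   mirror c1t2d0 c1t3d0   mirror c1t4d0 c1t5d0 \
      mirror c1t6d0 c1t7d0   mirror c1t8d0 c1t9d0   mirror c1t10d0 c1t11d0 \
      mirror c1t12d0 c1t13d0 \
      spare c1t14d0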

Compression – Ok, so this is where the geek in me decided to get a little crazy. ZFS allows you to turn on (and off) a variety of compression mechanisms on-the-fly on your pool. This comes with some unknown (depends on lots of factors, including your workload, CPUs, etc) performance penalty (CPU is required to compress/decompress), but can have performance upsides too (smaller reads and writes = less busy disk).

InnoDB is notoriously bad at disk usage (we see 2X+ space usage using InnoDB) and while it’s not an enormous concern, it’d be something nice to curtail. On most of our DB boxes, we have idle CPU around (we’re not really I/O bound either – MySQL is a strange duck in that you can be concurrency bound without being either CPU or I/O bound fairly easily thanks to poor locking), so I figured I’d go wild and give it a shot.

Lo and behold, it worked! We’re getting a 2.12X compression ratio on our DB, and performance is keeping up just fine. I ran some quick performance tests on large linear reads/writes and we measured 45.6MB/s sustained decompression and 39MB/s sustained compression with a single-threaded app on an Opteron CPU. We’ll probably continue to test compression stuff, and of course if we run into performance bottlenecks, we’ll turn it off immediately, but so far the mad science experiment is working.
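
If you want to poke at this on your own pool, the ratio is a filesystem property, and a crude single-threaded linear test is easy to improvise (a sketch – the pool name and paths are hypothetical, and copying real data files tells you far more than /dev/zero would, since zeroes compress absurdly well):

    # what compression is actually buying you
    zfs get compressratio MYPOOL
    # crude linear read (decompression) test against an existing large file
    dd if=/MYPOOL/mysql/ibdata1 of=/dev/null bs=1024k
    # crude linear write (compression) test – copy a real data file
    dd if=/MYPOOL/mysql/ibdata1 of=/MYPOOL/scratch/ibdata1.copy bs=1024k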

Configuration

Configuring everything was relatively painless. I bounced a few questions off of Sun (imho, this is where Sun really shines – they listen to their customers and put technical people with real answers within arm’s reach) and read the Evil Tuning Guide to ZFS. In the end I really only ended up tweaking two things (plus setting compression to gzip-9):

  • I set the recordsize to match InnoDB’s – 16KB: zfs set recordsize=16K MYPOOL
  • I turned off file-level prefetching. See the Evil Tuning Guide. (I’m testing with this on, now, and so far it seems fine.) Both tweaks, plus the compression setting, are collected in the sketch below.
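
Put together, the handful of knobs amounts to something like this (a sketch – the pool name is whatever you created, and the prefetch tunable is the /etc/system setting described in the Evil Tuning Guide, so double-check it against the guide for your build):

    # match the ZFS recordsize to InnoDB's 16KB page size
    zfs set recordsize=16K MYPOOL
    # gzip-9 compression on the MySQL data filesystem
    zfs set compression=gzip-9 MYPOOL
    # disable file-level prefetching (append to /etc/system, then reboot)
    echo 'set zfs:zfs_prefetch_disable = 1' >> /etc/system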

I believe since ZFS is fully checksummed and transactional (so partial writes never occur) I can disable InnoDB’s doublewrite buffer. I haven’t been brave enough to do this yet, but I plan to. I like performance. 🙂
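
For anyone braver than me, the switch would look roughly like this (a sketch only – sensible only on a checksummed, copy-on-write filesystem, and worth testing on a replica first):

    # start the slave without InnoDB's doublewrite buffer
    # (or put skip-innodb_doublewrite under [mysqld] in my.cnf)
    mysqld_safe --skip-innodb_doublewrite &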

Performance

This box has been in production in our most important DB cluster for two weeks now. On the metrics I care about (replication lag, query performance, CPU utilization, etc) it’s pulling its fair share of the read load and keeping completely up on replication. Just eyeballing the stats (we haven’t had time to number-crunch comparison stats, though we gave some to Sun that I’m hoping they crunch), I can’t tell a difference between this slave and any of the others in the cluster running Linux. I sure feel a lot better about the data integrity, though.
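
For the curious, those metrics are all cheap to eyeball from a shell (a sketch – connection details omitted, and prstat stands in where you’d normally reach for top):

    # how far behind the master this slave is, in seconds
    mysql -e 'SHOW SLAVE STATUS\G' | grep Seconds_Behind_Master
    # per-CPU utilization on OpenSolaris, refreshed every 5 seconds
    mpstat 5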

Why not [insert other OS here]?

We could have gone with Nexenta, FreeBSD, Mac OS X, or even *gulp* tried ZFS on FUSE/Linux. To be honest, Nexenta is the most interesting because it actually *is* the Solaris kernel plus Linux userland, exactly what I wanted. I’ve played with it a tiny bit, and plan to play with it more, but this is a mission-critical chunk of data we’re dealing with, so I need a company like Sun in my corner. I find myself wishing Sun had taken the Nexenta route (or offered support for it that I could buy or something). Instead, we’ll be buying software service & support from Sun for this and any other mission-critical OpenSolaris boxes.

FreeBSD also doesn’t have the support I need, Mac OS X wasn’t performant enough the last time I fiddled with it as a server, and most FUSE filesystems are slow so I didn’t even bother.

Gotchas

  • On my 64GB Linux boxes, I give InnoDB 54GB of buffer pool size. With otherwise exactly the same my.cnf settings, MySQL on OpenSolaris crashes with anything more than 40GB. 14GB, or 21.9% of my RAM, that I can’t seem to use effectively. Sun is looking into this, I’ll let you know if I find anything out.
  • For a Linux geek, OpenSolaris userland is still painful. Bear in mind that this is a single-purpose box, so all I really want to do is install and configure MySQL, then monitor the software and hardware. If this were a developer box, I would have already given up. OpenSolaris is still very early, so I’m still hopeful, but be prepared to invest some time. Some of my biggest peeves:
    • Common commands, like ‘ps’, have very different flags.
    • Some GNU bins are provided in /usr/gnu/bin – but a better ‘ps’ is missing, as is ‘top’ (no, ‘prstat’ is *not* the same!), ‘screen’, etc (Can anyone even use remote command-line Unix boxes without ‘screen’? If so, how?)
    • Packages are crazily named, making finding your stuff to install tough. Like instead of Apache being called ‘apache’ or ‘httpd’, it’s called ‘SUNWapch’. What?
    • After finally figuring out how to search for packages to get the names (‘pkg search -r Apache’ – which doesn’t provide pleasant results), I discovered that ‘top’ and ‘screen’ simply aren’t provided (or they’re named even worse than I thought). Instead, I had to go to a 3rd party repository, BlastWave, to get them. And then, of course, the ‘top’ OpenSolaris package wouldn’t actually install and I had to manually break into the package and extract the binary. Ugh. (The basic pkg workflow is sketched below.)
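
For reference, that basic IPS workflow is just two commands (a sketch – ‘SUNWapch’ is the name given above, but package names and repositories have shifted around over time):

    # search the remote repository for anything mentioning Apache
    pkg search -r Apache
    # install by the (unhelpfully named) package name
    pkg install SUNWapch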

Whew! Big post, but there was a lot of ground to cover. I’m sure there are questions, so please post in the comments and I’ll try to do a follow-up. As I fiddle, tweak, and change things I’ll try to post updates, too – but no promises. 🙂

UPDATE: One other gotcha I forgot to mention. When MySQL (or, presumably, anything else running on the box) gets really busy, user interactivity evaporates on OpenSolaris. Just hitting enter or any other key at a bash prompt over SSH can take many seconds to register. I remember when Linux had these sort of issues in the past, but had blissfully forgotten about them.

UPDATE: I went more in depth on ZFS compression testing and blogged the results. Enjoy!

Just so we're clear – I love Canon :)

September 24, 2008 23 comments

So you may have seen all the hoopla yesterday over Canon and Vincent Laforet’s amazing Canon 5D MkII footage. I thought maybe a little explanation was in order. First, a little background on me and Canon:

  • I, personally, am a monster Canon fanboy. I have a lot of cameras, and all of them – my collection of happy-snappys, our dSLRs, and even our video cameras – are Canon.
  • Our company is filled with Canon fanboys. We have more dSLR Canon bodies and lenses lying around than I can count.
  • The 5D MkII is the coolest camera I’ve ever heard of. Dozens of SmugMuggers have already pre-ordered them.
  • I’ve been dying to work with Canon since we started SmugMug. We’re a Top 500 website, we reach 6.5M people a month, our demographic is definitely high-end, and Nikon’s already in bed with Flickr. Sounds like a match made in heaven to me.

Ok, so now that I’ve set the stage, let’s talk about Vincent’s movie a little bit:

  • SmugMug had nothing to do with the production of the film. We didn’t even know it existed until we read this post on Vincent’s blog on Saturday afternoon.
  • The entire company caught fire. We lost our minds, we were so excited. Within minutes, we’d offered to provide *unlimited* HD bandwidth to Vincent. Bear in mind this was an unknown, but likely very large, cost with no real tangible upside. But we built this company because we love photography, video, and gadgets – and we’ve gotta stick with what we love.
  • Vincent enthusiastically took us up on our offer, and we all started brainstorming about how we could best release the film. Then we started brainstorming on how great this camera would be for indie photographers and filmmakers, and we lost our minds again. By Sunday morning, we had committed $25-50K to create a community-driven film using the Canon 5D MkII. (Note how fast things are moving – they were moving so fast, none of us had time to catch our breath).
  • We found out that Vincent had some awesome Behind-the-Scenes footage of the making of his film, Reverie, and so of course we offered to host that for free as well.
  • The time for release arrived. Now, this entire time, we’ve never talked to anyone at Canon. As far as I knew, this wasn’t a Canon deal – Vincent clearly says Canon told him “You can then produce a video and stills completely independently from Canon U.S.A.”
  • We posted full HD versions of both Reverie and the behind-the-scenes footage for the world to see, crossing our fingers that our bandwidth bill wouldn’t be more than we could bear.
  • Our customers went bananas. Awesome! They’re thrilled we’re interested in this stuff, because they’re interested in this stuff. Ok, great, so maybe this bandwidth bill will pay off in goodwill. 🙂
  • The press went bananas – both mainstream and online. Awesome! They’re gaga over the user response and the remarkable camera.
  • We got busy (and I personally got busy) telling everyone, press and non-press alike, who called, emailed, tweeted, blogged, etc that the Canon 5D MkII is a game-changing camera the likes of which we haven’t seen before.
  • Canon asked Vincent to ask us to take Reverie down.

SAY WHAT?!

Canon asked Vincent to ask us to take Reverie down.

😦

Being a Canon fanboy, I quickly complied – with a very heavy heart. I felt like I’d been kicked in the gut by one of my heroes. I felt betrayed. I also wrote a few things in the heat of the moment that came out harsher than they should have (and thankfully I didn’t publish what I’d originally written – whew!). I’ve now edited my blog post and would like to apologize to anyone at Canon whom I offended – I certainly wasn’t attacking Canon’s great employees, I was just lashing out.

But look at it from my point of view. I was risking an awful lot of money on bandwidth (I doubt it would have topped 6 figures, but easily could have been in the 5s) because I’m a camera geek and I love this stuff. Customer goodwill is fabulous, and we love generating it, but we were really doing this because we love the camera, love the passion that went into the film, and love to help our industry. We were hopeful that that goodwill would come back to us someday – but even if it didn’t, the chance to be a part of something as momentous as this film from this camera was worth it. And a good chunk of the company busted their butts over the weekend to make this happen. We could have been playing with our kids or out shooting photographs, but instead we spent the weekend setting things up for Vincent’s release.

And instead of appreciating how generous I thought we were being, and appreciating the monster amount of PR they were getting (better PR than any amount of money can buy), it felt like Canon was arbitrarily cutting us off for no good reason. I found myself asking “Well, if they want to host it on their pages, why don’t they just embed the video from SmugMug? Then they get it for free and we still get to be involved. It doesn’t even have to show our logo or anything – just use Quicktime but use a file from SmugMug’s servers. We’d save them money!”. We just wanted to be involved. And no-one at Canon called or emailed us at all – as I’m writing this, I’ve still never talked to anyone at Canon on this “independent from Canon” project.

In the cold light of the next day, though, I can see that I overreacted. It’s a sign of my passion for Canon and their products. No-one overreacts when some bad company does something stupid. But just look at Apple – the instant they make a mis-step (or even perceived mis-step), everyone is up in arms, ready to lynch Steve. Why? Because their products are so dang good, everyone’s super-passionate about them. So I let my passion get the better of me. I still wish Canon had wanted to work together, or at least let us be part of the project, but does it really matter?

I’m still buying a Canon 5D MkII and, I’m sure, lots of Canon goodies to go along with it. So what are you waiting for? Go get your own. 🙂

Amazing Canon 5D MkII HD video footage!!

September 22, 2008 27 comments

Pulitzer Prize-winning photographer Vincent Laforet got his hands on a Canon 5D MkII for a weekend. Rather than shoot some quick stills, he rounded up an entire film crew and put them to work using the amazing 1080p video capture it offers – in helicopters, no less! When SmugMug heard about this, we went bananas and offered to host both the short film itself, Reverie, as well as the behind-the-scenes footage:

See it auto-sized for your screen & browser, view it in Hi-Def, or embedded below. Your choice.

Also, you can see the Behind the Scenes footage (want it in HD?):

Then we went a little more bananas, and ponied up $25K to sponsor a community-created film led by Vincent, with another $25K to follow if other sponsors get on the train. We think this camera is truly a game-changer and we’re thrilled to help visionaries like Vincent prove it to the world.

Now, the astute geeks in the audience will note that Reverie isn’t hosted in 1080p, but instead is at 720p. I wish it weren’t so, and we’re actively trying to get our hands on the 1080p footage right out of Final Cut so we can let everyone take a peek – but it’s not our footage, so I don’t actually have it. I believe Canon may be putting it online themselves, but if they don’t, I’ll do everything I can to put it up – so stay tuned to Vincent’s blog as well as my own.

Man I love this industry! Thanks Canon!