Looks like I may have kicked over an anthill over at Sun – a bunch of Sun employees have contacted me. Sun’s response to my review has been very impressive – they’ve all been very polite, responsive, and anxious to get their hands dirty and see exactly what we’re doing. The people I’ve spoken to so far have been hardcore and extremely knowledgeable, so I expect some great results.
I’m gathering some additional data they asked for on Ubuntu Linux, and then we’re gonna get together at their Menlo Park offices next week and do some profiling to see both where the bottlenecks are and if there’s any other tweaking we need to do.
Linux is clearly my #1 target, since I’d rather not switch OSes unless there’s something super-compelling about Solaris (I remain open to the idea that there may be, but am skeptical), so we’ll be tackling that first. But the really good news for all the commenters is that we’ll be sticking another T1000 in one of our datacenters and trying to profile Solaris on T1000 side-by-side with Ubuntu on T1000.
This should get interesting. :)
Shacknews reported that the Lapboard is finally for sale (and that the Phantom “console” is finally dead).
About time. It’s been obvious for years that the only thing worth having is the Lapboard and that the Phantom would suck, if it ever shipped.
So my Sun T1000 review got dugg, and commented on, and there’s one loud-and-clear message: people would like to see Solaris results.
So would I. But as I outlined in the review, I don’t have any Solaris expertise. I am a busy guy. :)
(I should re-iterate that Jonathan Schwartz asked for everyone to review the T1000 with Ubuntu, which is exactly what I did.)
So if anyone at Sun would like to spend a few hours with us and help us get this box configured for the same test on Solaris, I’d love to see what it could do and post a follow-up.
Ever since Sun first announced the Niagara processors, I’ve been excited. Could Niagara change my business? Who wouldn’t want tons of physical cores coupled with tons of virtual cores? At every tech conference I’ve tried to get hard data from the people manning Sun’s booths. At the MySQL Users Conference, for example, they were hyping MySQL performance – yet there’s a huge MySQL bug where performance degrades with more CPUs, so that’s clearly not a great target for us (yet).
Nonetheless, the geek in me remained intrigued – I’ve believed for years that scaling # of CPUs, rather than purely speed of CPUs, was the future. One of the great parts of my job is that I get to play around with new toys and new technology, like Amazon’s S3 and Niagara, that can enhance our business or change it in some way. And every geek wants to dream that there’s some hot new CPU around the corner that’ll solve all their problems, right?
Sun has a great 60-day Try & Buy program. They make it basically as painless as clicking on the server you want, and a few days later it arrives. Very cool. Unfortunately, I haven’t used Sun gear since 1994, when I was using SunOS 4 (remember when SunOS was BSD-based?), so it would likely be time-consuming to try out both new hardware and new software. No thanks, I’m a busy guy.
Enter Jonathan Schwartz and his famous blog. Jonathan probably doesn’t remember me, but when I was 12 years old, I’d haunt the halls at NeXT every second I got and crashed NeXTWorld every year. I remember him. He was NeXT’s most important developer, and my father got the thankless task of buffering Steve Jobs and Jonathan. Both of them needed the other, but they couldn’t stand each other. Fun fun. :)
I’ve been meaning to touch base with Jonathan and see how he’s doing at his new job – and to see if a small web company like ours can shed any light on Sun’s direction. I think he’s got a very tough endeavor ahead of him – he’s gotta turn a massive company with lots of inertia around to compete in a whole new ballgame. For more than a decade now, datacenter computing has been shifting more and more rapidly towards free operating systems coupled with commodity hardware, and Sun nearly missed the boat. Now they’re scrambling to catch up. I believe Jonathan “gets it”, but we’ll have to see if he has the time and energy to really make the shift.
On June 16th, Jonathan posted a blog entry where he announced that Ubuntu Linux ran on Niagara, and that anyone who writes a thorough review would get to keep the box in question. Fantastic idea – I get to run Linux, which I know like the back of my hand, play with some hot new technology, and I get to keep the hardware for my time. Sold! So here we are, 60 days later, with a thorough review.
UPDATE: Jonathan has a new blog entry this morning about Niagara’s power savings. Pretty cool that you can get a rebate for using lower-power servers – but it doesn’t materially impact the conclusion of this review.
UPDATE #2: The comments here and on digg are pretty clear – you’d like to see Solaris results. Me too. Here’s an open call for help from Sun.
I should have posted this a few weeks ago, but better late than never. We now use Amazon S3 for a significant part of our storage solution. We’re absolutely in love with it – and our customers are too (even if they don’t know it).
As you probably know, SmugMug has been profitable since our first year, with no investment capital. We’ve had a great track record for keeping our customers’ priceless photos safe and secure using only the profits we’ve accrued to purchase our storage (yes, I said purchase. We have no debt – we own all of our storage, we don’t lease). And every SmugMug customer gets unlimited storage – so that’s no mean feat. (Currently, unlimited means ~300TB of storage and nearly 500,000,000 images. To put that into perspective, that’s more than 65,000 DVDs or 480,000 CDs).
But Amazon’s S3 takes our storage architecture to the next level:
- Your priceless photos are stored in multiple datacenters, in multiple states, and at multiple companies. They’re orders of magnitude more safe and secure.
- We’d already built a custom, low-cost commodity-hardware redundant scalable storage infrastructure. Nonetheless, it’s significantly cheaper to use S3 than to run our own – especially when you factor multiple states & datacenters into the equation.
- Perhaps even more importantly, our cash-flow situation is vastly improved. Instead of paying $25,000 for a handful of terabytes of redundant storage up-front, even before they’re used, we now pay $0.15/GB/month as we use it.
- When we have some sort of internal outage with storage, it doesn’t matter – Amazon’s always on. They eat their own dogfood – S3 is in production use on dozens of Amazon products. We’ve had storage-related internal outages a few times already, and our customers haven’t been able to tell. We’ll still have rare outages on our site, unfortunately (everyone does), but storage is now vastly less likely to be part of the cause.
- I started writing our S3 interface on a Monday, and by that Friday, we were live and in production. It really is that simple to pick up and use, and it was basically a drop-in addition to our existing storage.
- It’s fast. I don’t mean 15K-SCSI-RAID0-fast, but I do mean internet-latency-fast. It’s basically as fast as our internal local storage + the roundtrip speed of light to Amazon. I can measure the difference with computer timing, but in blind tests, humans haven’t been able to tell the difference. Everything we serve from Amazon feels fast.
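Part of why the integration went live in under a week is that S3’s REST API is tiny: every request is an ordinary HTTP verb plus an HMAC-SHA1 signature over a few headers. Here’s a minimal sketch of that signing step in Python – this isn’t our library, and the bucket, object path, and credentials below are the placeholder example values from Amazon’s own documentation, not ours:

```python
import base64
import hashlib
import hmac

def sign_s3_request(secret_key, verb, resource, date,
                    content_md5="", content_type="", amz_headers=None):
    """Compute the signature for an S3 REST request using the
    HMAC-SHA1 scheme S3 launched with."""
    # Canonicalize any x-amz-* headers: lowercase names, sorted, "name:value\n"
    amz = ""
    for name in sorted(amz_headers or {}):
        amz += "%s:%s\n" % (name.lower(), amz_headers[name])
    # StringToSign = VERB \n Content-MD5 \n Content-Type \n Date \n
    #                CanonicalizedAmzHeaders + CanonicalizedResource
    string_to_sign = ("\n".join([verb, content_md5, content_type, date])
                      + "\n" + amz + resource)
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# The result goes into the request's Authorization header:
#   Authorization: AWS <access_key_id>:<signature>
sig = sign_s3_request(
    "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",  # example key from AWS docs
    "GET",
    "/johnsmith/photos/puppy.jpg",
    "Tue, 27 Mar 2007 19:36:42 +0000",
)
print(sig)
```

That really is the whole authentication story – sign the string, PUT or GET the object – which is why it bolted onto our existing storage layer so easily.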
I hate to admit this, but Amazon has built a playing-field leveler. It’s now much much easier for a competitor of ours to spring fully-formed from two guys in a garage than it was. Anyone who doesn’t get on board with Amazon S3 (or the inevitable S3 competitors) may get left behind. I’m glad we’re first, but I doubt it’ll last.
Tim O’Reilly, technology visionary extraordinaire, recently said of Sun’s new ‘Thumper’, the Sun Fire X4500: “This is the Web 2.0 server.” While I think Tim has perhaps the clearest vision in the industry, and the Thumper does truly look awesome, this time I think he may have missed the mark. The Web 2.0 server is *any* cheap Linux box coupled with utility storage like S3.
Initially this post had a lot of technical detail (I am the ‘Chief Geek’, after all), but I removed it since it was probably getting boring. So this is the quick-and-dirty ‘Business Case for Amazon S3 and How it Helps our Customers’ post. If there’s enough interest, I can write up a detailed post about exactly how we use S3, how it works in conjunction with our own local distributed filesystem, and post our S3 library (which was derived from someone else’s). Post in the comments if that’s of interest.
Also, we’ll be presenting at a storage conference in Florida in late October (I’m sorry, I don’t have the name of the con with me, but I’ll update this post when I do), and have had a few other people request conference talks on the subject. Comment if that’s of interest, too, so we know where to go speak.
Finally, one last geek thought: Anyone using the SmugMug API is now actually using multiple APIs through ours (depending on what you’re doing, you may be using Google and/or Yahoo, but you’re almost certainly using Amazon). The stack continues to grow.
UPDATE #1: In response to a comment below, I don’t feel like we “bet the company” on S3 – every photo our customers entrust us with, we keep local copies in our existing distributed storage infrastructure. We use S3 as redundant secondary storage for use in cases of outages, data loss, or other catastrophe.