
Sun Honeymoon Update: Servers

April 11, 2007

It’s been two months since we divorced Rackable and married Sun as our new server & storage vendor, and lots of people have been asking how it’s going. While the ‘marriage’ is still young, the server side of things is going really, really well. We’re still starry-eyed in love. Our experience with Sun’s storage hardware isn’t nearly so rosy (in fact, it’s downright bad), but I’ll cover that in a near-future update.

So, what do we love about our new server partner?

  • We can standardize on a single server platform for 99% (if not 100%) of our future server needs. The SunFire X2200 M2 servers are 1U and scale up to 2 x dual-core Opterons with 32GB of RAM (and, just as important, down to 1 Opteron w/2GB of RAM). For us, that’s huge. Imagine, if you will, some catastrophe befalling one of our database boxes that requires hardware replacement. Instead of keeping lots of expensive, idle, duplicate hardware around, we could literally crack open a web server, add some more RAM and an external HBA card, and boom, we have a new DB box. There are many reasons Southwest is the most profitable US airline, and a huge one is standardized components.
  • Their lights-out management (LOM) is a dream. I dinged the Sun T1000 last year because its LOM is pretty terrible, but the X2200’s LOM is freaking fantastic. How fantastic? Let me count the ways:
    • It’s ethernet rather than serial. Yay!
    • It can share the same ethernet port the OS does. One wire for both LOM and OS! Less datacenter mess. Double yay!
    • It has a built-in Web UI that lets you see and access all of the features, in addition to telnet and SSH.
    • The Web UI lets you actually view the VGA output on the console. Not just serial console redirection – actual video output.
    • The LOM lets you remotely mount ISOs, floppy images, etc. Got a CD or DVD on your desktop at the office that you wish was in the drive at your datacenter? No problem.
    • Built-in email notification ability for status changes.
    • Lots of SNMP settings. Haven’t played with this much yet, but it looks full-featured.
    • Lots and lots of hardware details, like motherboard and BIOS versions, NIC details, etc., are all right there.
    • All of the statuses (fan speeds, temp readings, voltage indicators, etc.), with tons of detail, are at your fingertips.
  • Well built. First of all, it’s amazing what’s crammed into this 1U footprint. But second, it’s gorgeous inside. It’s clear that someone(s) spent a lot of time and energy working on the layout so that everything fit together just right. Feels like a labor of love. Nothing looks out of place.
  • I gave the T1000 props for the way Sun does illustrations on their lids to show what parts are hot-swappable vs cold-swappable and the X2200 is no exception. The lid is printed with all kinds of useful diagrams that make servicing the hardware much much easier. I’m a sucker for attention to detail (one reason I love Apple).
  • Turnaround time was excellent with both orders we’ve placed so far. We don’t have the luxury of planning for projects months and months in advance, so moving quickly when we need new hardware is key.
  • Pricing was great. Thanks to Sun’s AMD (and soon, Intel) server platforms, their pricing is competitive with everyone else. I truly believe that the baseline hardware (CPU, RAM, HDDs) has become commodity and that the differentiating value is in the extra technology (like LOM), service, and support. Sun gets this, I think.
  • Their rails just work. This is more rare than you might imagine – sucky rails really suck. Sun’s rails do what they’re supposed to – make it easy to install and, later, get access to your servers.
  • Their diagnostic CD was extremely useful and easy to use. This is an often overlooked area, but we were unlucky enough to get some bad RAM (see below), and this came in handy.
  • Fast. I thought this went without saying, since the performance bits are commodity components, but as you’ll see from the storage problems we had, speed on paper doesn’t always equal speed in the datacenter. These boxes are as fast as they should be – screaming.

So what’s not to like? Nothing’s bad enough that we’d kick Sun outta bed for eating crackers, but there are some quirks:

  • We bought these direct from Sun, with custom configurations, and I believe Sun is still trying to get their head around direct sales (vs VARs). As a result, it turns out that they arrived without all of the RAM already installed. No biggie, we just installed it ourselves. Only thing is, the RAM also wasn’t tested beforehand. We’re used to our systems being fully tested & burned-in prior to delivery, and sure enough, we got a bad piece of RAM. That sucked. For now, we’re just adding a day of burn-in to our install routine, but we’re hoping Sun standardizes on this in the future. UPDATE 1: Just got word from Sun, there is an option to have custom configs burned-in at no cost, but it adds an extra 2-3 weeks to the lead time. We’ll have to think about how to best use this here, since we usually want our gear fast.
  • As I mentioned in our engagement announcement, the sales and approval process (not the people) sucks. Having to go through the approval process over and over for each order that’s slightly different isn’t pleasant. Dell excels at this, by comparison. They fire off quotes (and hardware!) with lightning speed. Here’s how I wish it would work:
    • Sun goes through the approval process for SmugMug and assigns us a discount.
    • From then on, we can just log in to sun.com and place orders for as much (or as little) hardware as we want that day, and it automagically applies our discount.
    • Should we think our sales volume warrants a bigger discount or something, we re-engage to re-evaluate.
    • Our sales team at Sun gets to focus on keeping us up-to-date on new technology, roadmap changes, and everything else without wasting time on the approval process for small orders that are similar to orders we’ve placed in the past.
    • We’re happy, Sun’s happy, everyone’s happy.

If we could change anything about them, would we? Of course!

  • We’d love to see dual power supplies. Since power supplies are a very common failure point for servers, we like redundancy here. (The moving parts fail far more often than our power circuits do, so perhaps surprisingly, we want the redundancy to cover supply failures, not circuit failures.)
  • While we’re dreaming, I’d love to see DC power as an option and remove AC from the equation. We could get lower failure rates, better power utilization, and better redundancy in one fell swoop.
  • And if we really want to get pie-in-the-sky, I’d love to see some sort of liquid or gas cooling system so we can get cooling efficiencies too. This is way outside of my field of expertise, so I don’t know how it would work, but Blackbox seems like it has some great stuff along these lines.
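To put rough numbers on why dual supplies matter so much to us, here’s a quick back-of-envelope sketch. The 5% annual failure rate is an assumed, illustrative figure, not Sun’s spec or anything we’ve measured:

```python
# Back-of-envelope: yearly outage odds for one PSU vs. a redundant pair.
# The 5% annual failure rate is an assumption for illustration only.
p_fail = 0.05                  # assumed chance a single supply dies in a year

single = p_fail                # one supply: any failure downs the box
dual = p_fail ** 2             # redundant pair: both must fail (independence assumed)

print(f"single PSU outage odds: {single:.4f}")       # 0.0500
print(f"dual PSU outage odds:   {dual:.4f}")         # 0.0025
print(f"improvement factor:     {single / dual:.0f}x")  # 20x
```

The independence assumption is generous (a shared circuit or cooling failure can take out both supplies at once), but even so, the squaring effect is why redundant PSUs are such a cheap win.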

Stuff we really haven’t kicked the tires on yet:

  • We typically whip out our amp meter and take power readings as soon as we get new hardware in our datacenter, since power & cooling are huge concerns for us. This time, we were under such a time crunch (and so busy with all of the nasty storage problems I’ll blog about soon) that I haven’t had time. I’m hopeful that all of Sun’s noise about power efficiency is reflected in the real-world numbers, but I won’t know for sure until I get the amp meter out and test it.
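For anyone curious why we bother with the amp meter, here’s the arithmetic those readings feed into. Every input below is an assumed, illustrative value — not a measurement of the X2200:

```python
# Convert an amp reading on a datacenter circuit into a monthly power cost.
# All inputs are assumed, illustrative values, not measured figures.
amps = 1.5            # assumed draw read off the amp meter
volts = 208           # common US datacenter circuit voltage
power_factor = 0.95   # assumed for a modern switching supply
cost_per_kwh = 0.10   # assumed electricity rate, $/kWh

watts = amps * volts * power_factor
kwh_per_month = watts * 24 * 30 / 1000
print(f"{watts:.0f} W -> {kwh_per_month:.0f} kWh/month -> "
      f"${kwh_per_month * cost_per_kwh:.2f}/month")
```

Multiply that by hundreds of boxes (and roughly double it for cooling overhead), and it’s obvious why per-server efficiency claims are worth verifying with a meter rather than taking on faith.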

And finally, everyone at Sun deserves a shout-out. They’ve built a great product, and they’ve certainly shown us a great deal of support and personal attention, which we appreciate. If the people we’ve dealt with are any indicator of upcoming success, Sun’s future looks bright. (No pun intended.)

I will post a follow-up shortly detailing the nightmare that our quest for fast DB storage became and what we’ve managed to do about it, but for now, I hope this helps anyone looking for server solutions.

Bottom line: I can’t recommend the X2200 M2 highly enough.

Categories: datacenter, smugmug
  1. April 11, 2007 at 10:44 pm

    The x2200 takes up to 64G memory (if you can find the 4G modules).
    I agree that the x2200 is a lovely machine although the lom still has a couple of issues when used over ssh.
    If you want dual psu, then you have to look to the x4100 instead and for DC there’s the netra x4200 (although that’s a 2U box).

  2. April 12, 2007 at 3:16 pm

    4G modules are sooo expensive, I don’t even consider that an option.

  3. April 12, 2007 at 3:31 pm

    I actually have one of these for sale that I won in an event that Sun sponsored. It’s way too much machine for me and would go to waste as an MP3 server, if anyone is interested.


    (not trying to spam here, just trying to connect a couple dots)

  4. Anonymous
    April 12, 2007 at 3:52 pm

    I just began working for a small consulting firm which, through a support contract with Sun, helps support StorageTek customers with non-Solaris environments (Windows 2003, Red Hat Enterprise 3/4, IBM AIX, HP-UX, etc.). I’ll be interested to read your post about the storage issues and any solutions you found.

  5. rob
    April 12, 2007 at 4:18 pm

    HP’s DL365 and DL360 (opteron and xeon respectively) have redundant power supplies available. They also have a really nice “lom” called iLo, which works very well. Particularly on newer models with “ilo2”, where the performance of the remote video is almost good enough to watch video over.

    Their DL380, which is essentially the 2U version of the above, has DC power available as well.

  6. April 12, 2007 at 4:23 pm

    @rob: The deal-killer on the HP gear is that it’s only 8 DIMM slots, or 16GB of RAM with reasonable 2GB DIMMs. We need 32GB in some of our boxes, and we’re not gonna pay for 4GB DIMMs.

  7. alq
    April 12, 2007 at 4:24 pm

    I concur, we just got 3 of these and they perform very well so far for the price. We use them in a development environment as a small-scale replica of the higher-end x4600M2 (a great machine for Oracle).

  8. Bill
    April 12, 2007 at 6:08 pm

    I’m glad the X2200 M2’s are a quality box… I work for an outfit that bought about 16 of the 1st gen X2100’s and they’re big, fat, stinkers. Hardware RAID is completely broken and Sun’s only response is that it’s a “known issue”. Way to go, Sun.

  9. HAL9000
    April 12, 2007 at 8:02 pm

    Awesome to hear you’re enjoying these little guys so much. As a Sun employee it always fascinates me to see reactions to our gear — and I like to hear how they’re deployed client-side.

    Noisy little boxes ain’t they? The T1k’s and 2ks are pretty noisy too, not as much as the x4500’s or x4600’s though — they sound like jet turbines. The sales of the x2200 M2’s seem to be really hot nowadays. I’ve seen them newly deployed with a lot of companies (Joost, for example).

    Looking forward to your updates about your complaints with the data storage gear too. I’m a lab-rat so I’m around this gear all the time. What’s your storage setup like? 6140’s? Cheers!

  10. anon
    April 13, 2007 at 6:53 am

    Storagetek has some strengths and weaknesses. I would suggest IBM’s TotalStorage line. They sell disk and tape and software that just works.

  11. Kris
    April 13, 2007 at 5:30 pm

    “we could literally crack open a web server, add some more RAM and an external HBA card, and boom, we have a new DB box. ”

    Can you really run your DB on a 2-proc 1U box (supposing the U doesn’t matter except for cooling purposes)? I appreciate the RAM differential, but wouldn’t concurrency and other processing make you CPU-bound on a 2-proc box? Just curious how much of a real solution this is. I get that it’s possible, but at customer-acceptable latency levels?

  12. April 13, 2007 at 6:31 pm


    Oh yeah, easily. First, most of our DB workloads are I/O bound, not CPU bound. Second, we’re pretty good about spreading our workload around to scale out rather than scale up. There are areas we could improve on, but for the most part, we’re very efficient.

    All of our major DB boxes are 4 cores. And I think our performance speaks for itself – we have a stellar reputation in the industry and Alexa has us at:

    “Speed: Very Fast (81% of sites are slower), Avg Load Time: 1.0 Seconds”

    Which is particularly impressive for a media-heavy site like ours.

  13. alq
    April 13, 2007 at 7:25 pm


    We use these x2200M2s as load test replicas for bigger and more expensive production x4600M2s (I wish we could use the real deal for our load tests but finance won’t have it). Our database is very much CPU-bound because we push the run-code-in-the-db envelope pretty far (I heart PL/SQL… 😉). So the 2200s would not withstand the kind of load that our live servers see, but they are close enough to be able to baseline load tests. If I come across anything that contradicts my first impressions I’ll be happy to blog about it.


    Great blog!

  14. Kevin
    April 16, 2007 at 9:01 pm

    @ Don

    Is there any reason why you didn’t go with Silicon Mechanics? They provide amazing rackmount servers with 16 DIMMs in a 1U config.

    Linked below is just one example.


  15. April 17, 2007 at 12:39 am


    Sure. I haven’t heard of Silicon Mechanics. 🙂

    And since I’ve been burned by a medium-sized player (Rackable), I’ve made the decision that we’re going with a top-tier vendor (Sun, Dell, HP, IBM) from here on out, especially since most of what we buy is commodity x86, so we’re not likely to see large cost variances between the top-tier providers and the lesser tiers.

  16. Timothy
    April 17, 2007 at 1:32 pm

    @Don & @Kevin

    For what it’s worth, LiveJournal.com (millions of dynamic hits daily) exclusively uses Silicon Mechanics.


    And no, I don’t work for either company – just a big fan of Silicon Mechanics myself.

  17. April 17, 2007 at 5:58 pm


    Cool. Is there a compelling reason to use them? For me, I consider most of the core parts (CPU, RAM, HDDs) to be commodity, so there has to be something to “sell” me on one vendor over another.

  18. Timothy
    April 18, 2007 at 7:23 am

    I also agree the core parts are commodity nowadays. There are two main reasons why I like Silicon Mechanics.

    1) The configuration options SM provides, especially for 1U systems, are amazing. I’m not sure if you noticed their 1U page, but you can find a server with up to 16 DIMM slots and 4 hot-swappable drives in a 1U enclosure. You can also literally get 2 servers in 1 enclosure for less than the price of 2 separate 1U boxes. It’s difficult to find a vendor that allows for more than 1 CPU, 2 disks and 4 DIMM slots in a 1U, yet SM goes waaay beyond that.


    2) Price. SM is easily 15-20% less expensive than it’s competitors (including Dell). And when you factor in that SM allows you to customize the servers in ways that Dell, HP, IBM and even SUN don’t allow – it’s just icing on the cake🙂

    Now, for the not-so-good. For some, they only feel comfortable buying from a top-tier supplier/vendor. SM is certainly not top-tier, because they simply are not at that volume yet.

    Though what SM will provide is better-than-top-tier service. I can’t even describe how many times I’ve been on the phone with Sun or HP *trying* to buy a server and get it configured, and the company made the process so painful I just gave up.

    I’ve also been on the phone with SM late at night and it was like talking to a good friend there to help me.

    Again, I’m not here to sell you on them. I just wanted to share my good experiences.

  19. April 23, 2007 at 12:44 pm

    Hi Don, How many Sun Fire x2200 servers did you buy and install? Are you using Solaris 10 OS? Thanks.

  20. Steve
    April 23, 2007 at 6:11 pm

    It appears that Silicon Mechanics (http://www.siliconmechanics.com/), mentioned above, is a Supermicro (http://www.supermicro.com/) VAR. I’ve got 4 Supermicro boxes, 3 of which are ~4 years old, and have had no problems. I still like a lot of the Sun niceties myself (have several Enterprise boxes as well) and have my most mission-critical stuff there, but Supermicro is good for more budget-conscious purchases.

  21. April 23, 2007 at 8:56 pm

    @ Steve, you are correct, most of their boxes use Super Micro MOBO’s.

    Just today our demo box arrived from Silicon Mechanics. I heard of them from LiveJournal, gave them a call, and had a demo in my hands less than one week later (completely custom built to my specifications). The company has been very helpful so far. They know what they are talking about; you can throw Linux terms at them and they actually respond intelligently (Linux, memcached, MySQL, CentOS, etc).

    Lights-out is an add-on card, about $75, on most of their boxes.

    We are also debating between the “big boys” (dell, hp, ibm, sun) and going with a smaller player like Silicon Mechanics. Trying to weigh out all the advantages/disadvantages of each when building out our web application.

    For what it’s worth, I heard Second Life’s complete infrastructure runs on Silicon Mechanics. I’m getting the feeling somewhere down the line a lot of web companies decide a smaller server vendor is good for their business. We are trying to figure out *why*, just like everyone else it seems. I would love to hear more why SmugMug ditched Rackable… I have heard lots of good things about them. Maybe they are growing too fast?

    I can tell you 100% it’s a pain in the butt to buy stuff from HP & IBM and (probably) Sun. They want to take you out to lunch, have a bunch of meetings, and all of that crap. HP and IBM both passed us off to VARs, and I hate VARs. Getting a price quote is also a PITA, what a waste of time. When will they learn web companies don’t buy like Fortune 500 companies? Dell is great for quick quotes and fair pricing. Silicon Mechanics is looking promising too. We are also waiting for some servers from Sun’s startup essentials program.

  22. April 27, 2007 at 8:14 am

    Their LOM sounds like it is the main value of going sun.

  23. May 4, 2007 at 3:23 pm

    @Don – I’m interested in Sun’s lights-out management. Can you point me to the appropriate reading, or share your impressions / experiences here?

  24. May 4, 2007 at 8:05 pm

    @Kevin, @Timothy, @Casey:

    This is Dave from Silicon Mechanics. Thanks so much for the kind words. We appreciate the advocacy!


    Nice Blog!

  25. D Froob
    May 16, 2007 at 9:50 pm

    Don —

    I see the X2200 has two 8-lane PCI slots. This may sound weird, but can you fit a PCI Express card in those?

    I have an older Opteron Sun box and the 90-degree riser takes an 8- or 16-lane slot and turns it into a 1-lane expansion port – which no video card supports!

    If I want to connect two or three monitors or use a faster video card is that possible with this server?

  26. D Froob
    May 16, 2007 at 9:52 pm

    Sorry last post was worded poorly and not edited for clarity.

    Recapping — do the 8-lane PCI Express slots available in the X2200 support video cards?

    That’s what I meant to ask. And that question would include the consideration for a) space on the back of the machine for the card’s plate to be exposed, so one can actually use the video connector, and b) potentially clearance between the two video cards for onboard fans, as modern video cards run hot.

  27. May 16, 2007 at 9:52 pm

    @D Froob:

    We have PCI-Express cards in them now, but they’re for storage (LSI SAS HBAs). I’m not sure how many lanes they’re using, since that’s not a likely bottleneck for anything we’re doing.

    Haven’t tried video cards, though. I used to make video games, so video card performance used to be something I followed extremely closely, but those days are behind me.

    You can probably dig into their tech specs, or contact Sun to ask them. I’d be happy to send your email address along to someone at Sun if that’d help.

  28. May 16, 2007 at 9:54 pm

    @D Froob:

    If you’re talking about super-high-end video cards, like modern NVidia or ATI cards, there’s probably not nearly enough clearance or cooling going on in there. They’re 1U, so things are gonna be cramped.

    But if your goal is to get multiple monitors, a lower-end multi-mon video card might work.

  29. Dr. Kenneth Noisewater
    May 25, 2007 at 9:35 am

    qft++ on the multiple PSU hit. We’ve got a ton of V210s (and earlier Sun 1Us) that have a single PSU, and there was a recall on the V210 PSU fan that required the box go down for the swap. Even though it was a free swap, that still meant downtime.

    BTW, keep an eye out for the Niagara2s, FPUs on each core should make it a more attractive proposition for mangling image data.

    Oh, and regarding vidcards, Sun sells Quadros for their workstations, and you can run Xorg on Solaris using NVidia Solaris drivers for x86 (downloaded from NVidia’s site). I still run Linux on my Sun Ultra 20 workstation because Sun still doesn’t have full support for its chipset (particularly the SATA drivers: in Solaris they’re still emulating PATA) in Solaris 10.

  30. Leif Bergman
    June 14, 2007 at 9:52 am

    Hey, thanks for an informative piece. I have a similar server (sun fire x4100 M2) with the LOM also and I’m very happy with it. However, you mention that it can share the same ethernet port as the OS and I don’t see how to make that work. Are you sure of this? Can you give a hint🙂



  31. Leif Bergman
    June 14, 2007 at 10:01 am

    Actually, I just realized that the LOMs are different between our two servers, so never mind…

  32. Proteus
    August 4, 2007 at 8:07 pm

    Why not go blade? 1U server farms are quickly becoming a thing of the past. Blades offer a common form factor, better density, lower power, and more flexibility. Want a database server? Pop in a quad-socket blade (with 16 DIMM slots). Web? Quad-core Intel/dual-core AMD blades. Everything with perfect remote management, fully hardware N+N redundant. Need storage? Fibre, iSCSI, SAS… it’s all good.

    Blades are the ultimate way to run a server farm.

  33. UKDesigner
    September 2, 2007 at 4:09 am

    Hi Don,
    A few questions!

    How are you finding the X2200’s after six months of running them? We are considering an X2200 to complement our X2100, which has run for a year no problem.

    Secondly, for Apache/MySQL/PHP, would you reckon lower GHz/more RAM or higher GHz/less RAM more important? (Ideally both of course, but money rules!)

    Finally, is the extra performance of SAS over standard SATA worth it? It costs quite a bit more – is it relevant for a high-load Apache/MySQL/PHP server?


  34. September 4, 2007 at 11:42 am


    We’re still loving the X2200s. Best servers on the market – period.

    For Apache and PHP, I prefer CPU speed (and # of cores) to RAM, but for MySQL, RAM rules. If you’re running all three on one box (which I don’t recommend), MySQL is the most important of those three components, so go with the RAM.

    SAS and SATA are going to be basically the same speed, in terms of interface, since drives are relatively slow compared to the interface. However, you can get 15K SAS drives, which you cannot get with SATA, so if you really care about drive speed (say, because you have a MySQL DB that doesn’t fit in RAM), 15K SAS is the way to go.
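    To put rough numbers on the drive-speed difference, here’s the standard back-of-envelope random-IOPS estimate. The seek times are assumed typical published figures for drives of that era, not benchmarks of the disks in these boxes:

```python
# Rough random-IOPS ceiling per drive: 1 / (avg seek + avg rotational latency).
# Rotational latency averages half a revolution; the seek times below are
# assumed typical figures for the era, not measurements.
def iops(rpm, avg_seek_ms):
    rotational_ms = 0.5 * 60000 / rpm        # half a revolution, in ms
    return 1000 / (avg_seek_ms + rotational_ms)

print(f"15K SAS:   ~{iops(15000, 3.5):.0f} IOPS")   # assumed 3.5 ms avg seek
print(f"7.2K SATA: ~{iops(7200, 8.5):.0f} IOPS")    # assumed 8.5 ms avg seek
```

    Roughly a 2x-plus gap per spindle on random I/O, which is exactly the workload a MySQL DB that spills out of RAM generates.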

  35. UKDesigner
    September 12, 2007 at 9:04 am

    You might like to know that Sun are updating the X2200’s with the new Barcelona CPU…. we’re holding our order till the new ones ship ….

  36. UKDesigner
    January 19, 2008 at 8:36 am

    Hi Don,
    Well, we’ve kinda hung off buying our X2200 – looks like it’ll be some while before they get the Barcelona anyway, given that bug they found in it! Unfortunately, Sun’s Xeon offering, the X4150, is not really cost-efficient for us… looks like we’re going Supermicro with the Xeons….

