
Datacenter love: Equinix

July 26, 2007

I write a lot about products and companies that have potential, but aren’t quite perfect, like Amazon Unbox on TiVo and lots of Sun stuff. But this week’s outage at 365 Main, a datacenter in San Francisco (which we don’t use), reminded me that there are a few products and companies we love that I don’t talk about nearly enough. So I’ll start with our datacenter, Equinix, and try to post about some of the others, too.

SmugMug got its start with 3 old, used VA Linux boxes (dual 700MHz Pentium IIIs with 2GB of RAM, which are still in production today and have been our most reliable boxes) from a dead dotcom, which we threw into a friend’s cheap rack at Hurricane Electric. Once the money started flowing in, and we ran into HE’s power constraints and poor bandwidth, we hunted around for datacenter space. Equinix had the very best reputation among the Operations crowd here in Silicon Valley, so we gave them a shot and pulled out of Hurricane Electric.

I should warn you up front that there’s a little “sticker shock” when you first talk with Equinix (ok, and it returns every time you need to buy more stuff from them), but in the end, it’s well worth it. It turns out that in life, some things are worth paying for. Datacenter space is certainly one of those things (and we feel like photo sharing is too!).

In the ~4 years we’ve been with Equinix, we’ve had only one major problem: They sold our power out from under us (to Yahoo) which forced us to move from one of their locations to another. Ugh. Datacenter moves, especially with hundreds of terabytes of disks, really suck. Luckily, thanks to decent system architecture and some magic from Amazon S3, we were able to do 99% of our move during normal business hours over the course of a month with no impact on our users.

In all fairness to Equinix (though this is no excuse), they weren’t the only datacenter that was poorly prepared for the ‘Power is King’ change in the datacenter landscape a few years back. Plenty of other companies with other providers tell me the same story, so we’re not alone. Datacenters all over the place used to sell you mostly based on space (square footage) rather than power (watts). They all got burned when CPU and server vendors started shipping really fast & dense gear. Nowadays, almost the entire negotiation is about power, and everyone has empty dead space in their rented cages. Such is life.

On the bright side, everything else about Equinix rocks:

  • Power. I’m surprised to hear all of the horror stories out of 365 Main because I assumed they were as good as Equinix has been for us. We haven’t had a single power-related outage in all of the years we’ve been there. It just works – and it’d better, since that’s the biggest reason we use a datacenter.
  • Metro cross-connects. If you’re hosted in multiple Equinix datacenters in a single metro area, like we are, you can get cheap (a few hundred bucks per month) GigE cross-connects wired between your various locations.
  • Support. I’m still surprised, every time we need to use Equinix’s support staff, by how super-knowledgeable and helpful they actually are. I’m talking about hardcore networking and routing questions. BGP, whatever, you name it – they know it. Better than we do.
  • Equinix Direct. I’m always surprised when I talk to other Equinix customers who don’t know about this gem. It’s a way to provision your IP transit providers on a month-by-month basis with no minimum commits or contracts. You pick your providers and pay as you go. Pretty sweet. We’re already directly multi-homed on GigE with multiple providers, but we mix in Equinix Direct to have access to still more. Best thing? ED doesn’t add an extra BGP hop, so your routes still look fast (as opposed to someone like InterNAP, who adds an extra BGP hop to do similar stuff) – there’s a little sketch of the AS-path math after this list.
  • Security. 5 biometric scanners are between you and your cage when you enter the building, with live security on hand 24/7. Stuff like this is fairly common at high-end datacenters, but it’s important, so I’m mentioning it anyway.
  • Bandwidth providers. Equinix is a carrier-neutral facility, and basically everyone has connectivity there, so you can easily pick whomever you’d like to carry your traffic.
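
To make that last BGP point concrete, here’s a rough sketch of the AS-path comparison. This is my own toy illustration, not anything from Equinix or from our routers, and the ASNs are private-range placeholders rather than anyone’s real numbers. After local-preference, BGP’s best-path selection prefers the route that traverses the fewest ASNs, so an intermediary that inserts its own ASN into the path makes the very same route look one hop longer.

```python
# Toy illustration of BGP's shortest-AS-path preference (simplified: ignores
# local-pref, MED, and the rest of the best-path tie-breakers).
# ASNs below are private-range placeholders, not real networks.

def preferred(routes):
    """Pick the route whose AS path traverses the fewest ASNs."""
    return min(routes, key=len)

# Two ways to reach the same prefix through the same upstream carrier:
direct = [64600, 64700]             # carrier -> destination (how Equinix Direct appears)
aggregated = [64512, 64600, 64700]  # aggregator's own ASN -> carrier -> destination

print(preferred([direct, aggregated]))  # [64600, 64700] -- the shorter, "direct" path wins
```

That’s the whole trick: since ED never appears in the AS path, remote networks see your routes exactly as if you bought transit straight from each carrier.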

Of course, they do all of the other myriad things a datacenter is supposed to do. One of the reasons I haven’t blogged about them in the past is because they just work – and they work so well, I just don’t spend much time thinking about them.

Which, of course, is the way it’s supposed to be. 🙂

(Now, of course, I’ve jinxed the whole thing like Red Envelope and our datacenters are going to explode in a Martian Invasion. Sorry about that!)

Categories: datacenter
  1. July 26, 2007 at 10:52 pm

    You didn’t jinx your datacenter, you jinxed your VA Linux boxes. 🙂

  2. Dan
    July 27, 2007 at 2:19 pm

    I love Equinix and I’ve been colocating with them since 2002 (’01?) at either 11 Great Oaks or 255 Caspian.

    The biggest problem I have with their expansion in the Bay Area is that they rely almost 100% on the metro fiber loop to get all of their connectivity from 11 Great Oaks (the original Bay Area location). A couple providers pop at Lundy or Caspian, but not many. If you’re at 11 Great Oaks, connectivity life is good. But they ran out of power a while ago.

    If the providers would build out to the newly expanded datacenters, I’d be extremely happy, but they don’t feel the need to, because most people are happy using the metro network. Maybe I’m just overly paranoid, but I want my provider popped in the same building.

  3. July 31, 2007 at 11:32 am

    That’s great about Equinix.
    The issue with 365 Main was that the Hitec generators didn’t fire up when there was a power outage. Sounds like the sort of thing that could happen to any data center. What do you do when the power and the backup system fail?

  4. Joe
    September 23, 2007 at 1:21 pm

    I currently work for Equinix and have worked in the ERC department for over 3 years. Just wanted to say, great article. It is great to hear good things that come from all the hard work that goes on in the background to make everything work properly.

  5. jonathan
    November 23, 2007 at 5:41 pm

    “The issue with 365 Main was that the Hitec generators didn’t fire up when there was a power outage. Sounds like the sort of thing that could happen to any data center. What do you do when the power and the backup system fail?”

    Then that’s no good! It all comes down to planned maintenance. Data centers should always make sure that the backup IS or WOULD be working, not just that it SHOULD be working.

  6. nottlv
    January 9, 2008 at 9:08 am

    While I generally like Equinix (we have space with them in a few facilities across the U.S.), to be fair they have had power-related outages at some locations (the Chicago outage in the summer of 2005 and a partial outage in 2005 at one of their large Ashburn facilities come to mind).

    You certainly have had a much better experience with Equinix remote hands than we have had, and we’ve never used them for anything as complicated as BGP or high level network work. We’ve had a range of issues, mostly with package handling and the occasional slow response to emergency requests. Equinix brands their remote hands service as “SmartHands”, which is jokingly referred to as “DumbHands” by many of the end users in the datacenter due to the quality of their work. We’ve ended up contracting out our remote hands work to other local companies in the area.

    I really don’t care much for EquinixDirect in practice, though the idea is interesting. Pricing is usually not that great (the downside of buying spot or short-term bandwidth contracts), and the mix of carriers is pretty weak at most locations. Your comment vis-à-vis Internap is incorrect. EquinixDirect uses standard BGP (i.e. best path is determined by the number of AS hops), while Internap does route optimization based on path analysis. The performance of Internap is going to be demonstrably better than EquinixDirect; aside from route optimization, they simply have access to more, better-performing routes after purchasing transit from 7-9 Tier 1 carriers. One other major downside of EquinixDirect (or at least the last time we looked at it) is that as an end user you have very little (if any) control over the routing. If an end user contacts you about a subpar route chosen by standard BGP, the way EquinixDirect is architected it’s almost impossible to change that (though in fairness this may have changed; we haven’t looked at it in over a year).

    All of this being said, I still feel Equinix is the crème de la crème data center operator. They are among the most expensive, but you do get what you pay for.

  7. January 9, 2008 at 11:39 am


    We haven’t had any power issues, which is all I can really comment on.

    We *have* had some major package acceptance problems which completely slipped my mind when I wrote this, but I’ll have to post a follow-up or something. Thanks for the reminder!

    As for InterNAP vs Equinix Direct (we were a customer of both), the big difference is that Equinix Direct doesn’t show up as a BGP hop. It doesn’t have its own ASN, so it “looks” like you have transit directly with each provider. This is a major win over something like InterNAP because BGP sees the route as being shorter. Having the InterNAP ASN in the routing table means InterNAP is rarely chosen as the best route, simply because it traverses more ASNs.

  8. nottlv
    January 13, 2008 at 11:17 am


    The comparison I’m making is more between being single-homed with Internap and utilizing EquinixDirect with your own AS/BGP routing; you seemed to be implying that using EquinixDirect results in faster routes, and I’m disagreeing with that. If you have your own AS, then using Internap with a mix of other carriers probably doesn’t make a lot of sense; it dilutes the performance benefits of Internap and, as you mention, increases the AS hop count by one, so it can make traffic shaping more difficult. I would guess that the majority of their customers don’t run their own AS; unless you need a lot of portable address space it doesn’t make a whole lot of sense. I view Internap differently than just a transit provider; I think of them as someone you outsource your BGP to so you don’t have to do that yourself. They don’t really have a network or backbone per se (my understanding is that they do have a small network for shuttling traffic between their POPs, primarily for their CDN); they’re just aggregating transit from 7-9 Tier 1 carriers and using path-based route optimization.

    From what I understand, route manipulation by the customer is still a major limitation of EquinixDirect, and as I’m sure you’re aware, straight AS-based routing using BGP frequently does not choose the optimal-performing route. My point is that being single-homed to Internap is going to give your end users better performance than strictly using EquinixDirect; I don’t think there’s any question about that. EquinixDirect doesn’t have route optimization via path probes and has a much smaller list of Tier 1 carriers, so even if you use every carrier they offer you’ll still have less route diversity than Internap. There’s nothing magical about Internap; purchasing transit from 7-9 Tier 1 carriers gives you access to a lot of routes, and using path-based route optimization gives you a better chance of getting the best route out of that mix. You can do essentially the same thing on your own using either Internap’s Flow Control Platform or the Avaya/RouteScience devices. To do that at the level of redundancy of Internap, you’re talking a couple hundred thousand dollars just in initial capex outlay, plus the cost of network engineers, bandwidth, maintenance contracts, etc. For most organizations that doesn’t make fiscal sense.

  9. January 13, 2008 at 1:01 pm


    Ahh, yes, that makes sense. It didn’t even occur to me that we were comparing apples to oranges. 🙂

    We’re multi-homed with transit with multiple backbone providers, so we use ED as a relatively inexpensive way to “fill in the gaps” without nasty long term / high commit contracts.

    Your use case makes perfect sense, and in that regard, InterNAP is very useful. We just outgrew that use case years ago. 🙂

  10. August 9, 2008 at 9:03 pm

    If you need bandwidth out of Equinix 1735 Lundy (SV3) in San Jose, CA, Centauri Communications has a physical PoP there, with dual fibers that interconnect back to 200 Paul Ave. in San Francisco. So it’s another diverse route, rather than everything going back to 11 Great Oaks in the end, which could have issues.

    Just email sales@centauricom.com


    Chris Demsey

  11. December 4, 2009 at 3:27 pm

    Very good article about Equinix. Hard work and what they get paid go hand in hand.
