
Sun Fire 'CoolThreads' T1000 review

August 15, 2006

Ever since they first announced the Niagara processors at Sun, I’ve been excited. Could Niagara change my business? Who wouldn’t want tons of physical cores coupled with tons of virtual cores? At every tech conference I’ve tried to get hard data from the people manning Sun’s booths. At MySQL User’s Conference they were hyping MySQL performance, for example – yet there’s a huge MySQL bug where performance degrades with more CPUs, so that’s clearly not a great target for us (yet).

Nonetheless, the geek in me remained intrigued – I’ve believed for years that scaling # of CPUs, rather than purely speed of CPUs, was the future. One of the great parts of my job is that I get to play around with new toys and new technology, like Amazon’s S3 and Niagara, that can enhance our business or change it in some way. And every geek wants to dream that there’s some hot new CPU around the corner that’ll solve all their problems, right?

SunFire CoolThreads T1000 inside

Sun has a great 60-day Try & Buy program. They make it basically as painless as clicking on the server you want, and a few days later it arrives. Very cool. Unfortunately, I haven’t used Sun gear since 1994, when I was using SunOS 4 (remember when SunOS was BSD-based?), so it would likely be time-consuming to try out both new hardware and new software. No thanks, I’m a busy guy.

Enter Jonathan Schwartz and his famous blog. Jonathan probably doesn’t remember me, but when I was 12 years old, I’d haunt the halls at NeXT every second I got and crashed NeXTWorld every year. I remember him. He was NeXT’s most important developer, and my father got the thankless task of buffering Steve Jobs and Jonathan. Both of them needed the other, but they couldn’t stand each other. Fun fun. 🙂

I’ve been meaning to touch base with Jonathan and see how he’s doing at his new job – and to see if a small web company like ours can shed any light on Sun’s direction. I think he’s got a very tough endeavor ahead of him – he’s gotta turn a massive company with lots of inertia around to compete in a whole new ballgame. For more than a decade now, datacenter computing has been shifting more and more rapidly towards free operating systems coupled with commodity hardware, and Sun nearly missed the boat. Now they’re scrambling to catch up. I believe Jonathan “gets it”, but we’ll have to see if he has the time and energy to really make the shift.

On June 16th, Jonathan posted a blog entry where he announced that Ubuntu Linux ran on Niagara, and that anyone who writes a thorough review would get to keep the box in question. Fantastic idea – I get to run Linux, which I know like the back of my hand, play with some hot new technology, and I get to keep the hardware for my time. Sold! So here we are, 60 days later, with a thorough review.

UPDATE: Jonathan has a new blog entry this morning about Niagara’s power savings. Pretty cool that you can get a rebate for using lower-power servers – but it doesn’t materially impact the conclusion of this review.

UPDATE #2: The comments here and on digg are pretty clear – you’d like to see Solaris results. Me too. Here’s an open call for help from Sun.


THE GOAL

SmugMug has lots of different compute tasks, ranging from image manipulation (heavy CPU & math) to DB operations (threading and RAM heavy) to serving dynamic page content (lots of fast, light CPU requests). The obvious low-hanging fruit for something like Niagara, which has lots of slower cores, would be serving lightweight dynamic content. Specifically the thing we do most of – serving photos. We can load Apache processes up so each core gets a nice load.

As you can imagine, our “get photo” interface is heavily optimized for speed – it basically just checks to make sure you have permission to see the photo, grabs it from our distributed storage infrastructure (or Amazon’s S3, depending), and shoves it down the network pipe. We serve millions and millions of them a day. I’ve often wondered if lots of smaller boxes with slower CPUs might be more efficient at serving them than the relatively small number of really fast CPUs we have doing the task now. Niagara provided a perfect opportunity to find out.

Our goal is a fairly simple one with any new piece of hardware we test. And now is a great time to test this stuff – we need to buy a bunch more servers. We always want to get the lowest cost per processing unit per watt. $/CPU/watt. That’s it, it’s that simple. Can Niagara deliver?

THE SERVERS

In the far corner, wearing red shorts, is the defending champion – SmugMug’s baseline cluster member, Olde Faithful: a Rackable C1000. Weighing in at 1U half-depth, with two dual-core Opteron 270HE 2.0GHz processors, 4GB of RAM, no disks, and Gigabit Ethernet. Her contract is a svelte $3,595. Olde Faithful, and her precursors, have been getting the job done at SmugMug for years.

In the near corner, wearing the blue shorts, is the upstart contender from Sun, CoolThreads: a Sun Fire T1000, with a single 8-core 1.0GHz Niagara processor containing 4 virtual cores per physical core, 4GB of RAM, an 80GB SATA HDD, and quad Gigabit Ethernet. Her contract is a whopping $8,395*. CoolThreads looks sleek in her stylish 1U, 2/3rds depth metal case, but can she deliver the goods?

Sun Fire CoolThreads T1000 CPU and RAM.  No, I don't know which Sun employee those fingerprints belong to.  :)

* Note, we actually are testing a T1000 with 16GB of RAM since that’s what Sun sent us, but for this particular use case, we’re only going to use a max of 1.5GB, so we’d buy the 4GB model. The price shown, $8,395, is for the 4GB version with an 80GB disk, which we don’t need. Could probably save $100 if we could talk Sun into dropping the disk.

THE HYPOTHESIS

I’m not a big believer in the raw clock speed of CPUs, especially after watching our 2GHz Opterons clobber our 3GHz non-Woodcrest Xeons, but in this case I’m gonna give Sun the benefit of the doubt and guess that their clock speed is probably “high quality,” just like AMD’s.

CoolThreads’ 8 cores x 1.0GHz = 8GHz. Olde Faithful’s 4 cores x 2.0GHz = 8GHz. My hypothesis is that both are going to deliver roughly equivalent performance when it comes to the $/CPU portion of the equation. I predict a tie under load (which is what we care about most), but that when not loaded, Olde Faithful will serve photos faster.

Based on Sun’s literature, and the fact that Sun also sells Opteron boxes (for less money) but continues to sell and promote the Niagara boxes, too, I think there’s likely something there with the CPU/watt portion of the equation. I think CoolThreads will win the CPU/watt comparison, but it won’t save nearly enough dollars to make up the 2.3X base cost.

Remember, the equation we care about is $/CPU/watt. Power efficiency alone isn’t enough. If CoolThreads does win at the $/CPU/watt game, Sun can expect lots of future orders from us.


Categories: business, smugmug, web 2.0
  1. gcc and others
    August 15, 2006 at 1:22 pm

    gcc ships with Solaris; it’s in /usr/sfw/bin, as is wget, among others (including mysql).

    top is not bundled as it is a fundamentally broken tool. The Solaris equivalent is prstat. prstat is part of the ptools – a whole group of really useful tools for process analysis.

  2. August 15, 2006 at 1:25 pm

    Ahh, thanks for the info. I asked a few Solaris gurus (none of whom have migrated to 10 yet, though), and they all agreed that it didn’t come with any of those tools and that I’d have to get them from SunFreeware.com instead.

    Thanks for the heads up!

  3. Jeff
    August 15, 2006 at 2:38 pm

    If you go back and read closely (follow the links), the $700-$1000 Jonathan is referring to is a PG&E rebate under an energy savings program. Presumably based on the energy requirements of whatever servers you are replacing with more energy-efficient kit.

  4. Sam
    August 15, 2006 at 3:15 pm

    Don,

    Are you planning to review the Niagara box with Solaris, too? I don’t use Linux and would be interested in reading the review of Solaris on Niagara. The $/CPU/watt gains of Niagara are being hailed by people other than Sun also, so it is not likely that they’re all wrong. E.g., just check http://www.joyeur.com.
    Hopefully, someone will guide you on how to properly utilize the power of the server with Solaris, so I can read your review 🙂

  5. August 15, 2006 at 3:47 pm

    That’s awesome computing! There is no way my computer is able to match that.

    Other than large data analysis applications that require supercomputers, I don’t see the real need for these multi-core PCs at home.

  6. Dan
    August 15, 2006 at 3:53 pm

    Your conclusion about Solaris 10 is exactly the same as mine. No C compiler? Game over. If you are a Unix, you are up against FreeBSD and Linux. They come with C compilers. I understand that gcc & wget are part of Solaris 10, but I never found them when I played with Solaris for hours. Why hide them in a non-standard place? And the Solaris installer is prehistoric…

    Great review with a sad but not unexpected conclusion given Sun’s track record the past few years…

  7. Seth Brundle
    August 15, 2006 at 3:57 pm

    I know that many moons ago Linus stated that Linux’s process scheduler ain’t so great compared to Windows/Solaris – that would exacerbate things for the Niagara vs. Olde Faithful. Let’s see Solaris results!

  8. August 15, 2006 at 4:09 pm

    @Seth Brundle:
    Before or after the implementation of the big O(1) scheduler?

  9. August 15, 2006 at 4:29 pm

    Would really want to see the Solaris results. Let’s face it: just because you can run Linux on something doesn’t mean it runs well. Frankly I’m surprised to see it come even close to Linux on the x86. And we all know that if you have a rack or 10 of boxes in a datacenter you’re not installing the OS 1 system at a time; you’re making an image and pushing it out. So custom building an OS image isn’t that big a deal. I’ve found that the time I spend adding things to Solaris vs. the time I spend taking things off Linux is roughly equal. Though ultimately you are right: Solaris should come with at the very least cc installed.

  10. Gerard Snitselaar
    August 15, 2006 at 4:32 pm

    http://developers.sun.com/prodtech/cc/downloads/index.jsp

    As long as you register (free) with SDN you can download Sun Studio and get Sun’s C compiler.

  11. Brandon Black
    August 15, 2006 at 4:36 pm

    @Cody: I’m pretty sure that was before. 2.6’s scheduler is pretty tight, and a lot of work has gone into it for large-#cpu scaling (mostly targeted at multi-proc multi-core Opteron NUMA).

    SuSE Linux Enterprise Server 10 might be an interesting thing to try over Ubuntu, as they have some cpu/mem scalability patches that aren’t in mainline yet IIRC.

  12. The.Bit.Bucket
    August 15, 2006 at 4:42 pm

    I do wonder if the difference here is Ubuntu. Not that Ubuntu sucks (I use it and like it), but we’ve seen in the wild a 30%+ performance improvement between Solaris 8 and 10. Perhaps the Linux kernel is not as optimized for SPARC as Solaris. Perhaps the new multi-threaded network stack in Solaris 10 would help skew the performance numbers. And hey, just give dtrace a spin and that might be enough to convince you to switch.

    One more thing… it sounds like you buy quite a few servers. Is the pricing you quoted for the Rackable systems list or discounted? Given that GSA discount for Sun gear is usually 20% or more there may be some wiggle room on the unit costs.

  13. WISPGuy
    August 15, 2006 at 4:44 pm

    Thanks for the review. I have been considering selling Sun servers as part of a spam filtering solution. I am glad to see that someone has taken the time to test against live data and against another powerhouse server.

  14. Gavin
    August 15, 2006 at 4:53 pm

    hey, been a couple of years since I used Solaris but from memory it has the compiler and make and so on in /usr/ccs/bin – you need to add it to your path.

    I agree it’s a pain though compared to the bsd/linux mob as you do still need to install a lot of packages from sunfreeware.

  15. August 15, 2006 at 4:55 pm

    Heh, I had to laugh at your review because of the Solaris stuff. It was almost exactly like my experiences years ago.

    Like others said, /usr/sfw is the location for non-Sun-supported software. Non-standard, yes, but nothing too odd once you know about it. The reason it is there is that Sun compiles it for Solaris but doesn’t support it if you have problems, e.g. you are on your own for configuring, etc.

    And top is awful on Solaris; no real Solaris admin uses top. prstat is the closest – have a look in /usr/sbin at the p* tools. Do a man on ptree or similar and look at the bottom of the man page for details.

    I won’t lie, Solaris is a completely different beast than Linux, 10 more so than the prior releases with svc.configd. But install Solaris and re-run the tests. The version of OpenSSL shipped has the hardware acceleration of the Niagara chip enabled. Should make some tests run faster. At work we only have T2000s so I can’t really speak to the T1000, but it is a decent box. Especially when you need to run SPARC software.

  16. Dan
    August 15, 2006 at 4:55 pm

    Yes, can we see Solaris on Niagara also?

  17. August 15, 2006 at 5:01 pm

    Oh and prefork? Might also try using the worker mpm. Prefork… makes processes. You’ll get better performance on both systems with worker, especially on the T1000.

  18. August 15, 2006 at 5:16 pm

    Your review seems to have a little bit of an anti-Sun take. You complain about the lack of compilers and performance monitoring utilities, but after someone makes a comment correcting you, your article hasn’t been corrected, which spreads FUD about Solaris. A side note: Solaris has DTrace, which blows any other OS’s performance monitoring utilities out of the water. You take the time to gripe about Sun’s hardware using RARP but later mention you found boot net:dhcp. It had to take more time to complain about RARP than it would have taken to remove the excess memory and disconnect the hard drive to give a more accurate comparison.

    I use Linux and Solaris and like them both, but this isn’t a fair review. For one, the Ubuntu-on-SPARC support thing is just getting started, and secondly, Linux isn’t as optimized on SPARC as it is on x86. You aren’t making a $/CPU/watt comparison like you claim because you’re discarding the OS that is best for the T1000. Not knowing Solaris really should not be an issue for someone with any type of UNIX experience. To those people complaining about non-standard locations: how hard is “find / -name gcc”?

    If all you’re really benchmarking is Apache, all you’d have to do to install the latest Apache 2.2 (Solaris comes with Apache 2 already) is:

    PATH=$PATH:/usr/sfw/bin:/usr/ccs/bin
    export PATH

    echo "GETTING APACHE 2.2"
    wget http://www.apache.org/dist/httpd/httpd-2.2.2.tar.gz

    echo "UNCOMPRESSING APACHE"
    gunzip httpd-2.2.2.tar.gz
    tar -xvf httpd-2.2.2.tar
    cd httpd-2.2.2

    echo "BUILDING APR 1.2"
    # Build and install apr 1.2
    cd srclib/apr
    ./configure --prefix=/opt/apr-httpd
    make
    make install

    echo "BUILDING APR-UTIL 1.2"
    cd ../apr-util
    ./configure --prefix=/opt/apr-util-httpd --with-apr=/opt/apr-httpd
    make
    make install

    echo "BUILDING HTTPD 2.2.2"
    cd ../..
    ./configure --prefix=/opt/httpd --with-apr=/opt/apr-httpd --with-apr-util=/opt/apr-util-httpd --enable-mods-shared=all --enable-ssl=shared --enable-ssl --with-ssl=/usr/local/ssl --enable-proxy=shared --enable-proxy-http=shared
    make
    make install

    Notice the only thing Solaris-specific about that is setting the PATH. I think the only valid complaint you could have had at all would have been about having to set up a separate bootable copy of Solaris in order to remove the disk drive and continue to run Solaris without a hard drive. You could have even used a Linux system as a boot server for Solaris, so you wouldn’t even have had to set up a new server to make the T1000 diskless.

  19. August 15, 2006 at 5:21 pm

    @Chase:

    I was *excited* about the Sun T1000 or I wouldn’t have spent so much time with it. I certainly am not anti-Sun, and I’m sorry if I came across that way.

    I think the entire review, up to the data, reads like I’m enthusiastic about the possibilities. But I can’t ignore our hard data – the T1000 costs more than it should.

    Also, this definitely wasn’t just a stock Apache2 install – so it’s not quite that simple. I wouldn’t call this an Apache2 benchmark at all – Apache is far faster than these benchmarks.

    This was benchmarking *our software* which happens to use Apache2 to pass HTTP messages back and forth.

    As seen on the front page of this review, I’ve already asked Sun publicly if they’d like to assist with Solaris so it can be included. Hopefully they’re interested.

    Thanks for the comment,
    Don

  20. Sean
    August 15, 2006 at 5:24 pm

    I’m curious as to why one would want a C-compiler on a web server.

    Blaming Solaris for “non-standard” locations of tools (like top, which on Solaris has to grub through the kernel (a la RHEL3) looking for memory/processor statistics) isn’t going to fly, either. I could turn around and complain that Linux lacks the “standard” prstat/ptools. Or a working cachefs filesystem. Or DTrace. Or zones.

    This review reads like a “I stared at the machine sideways for a minute, and it didn’t look like linux, so I punted”. That’s OK, but don’t bash the OS because it’s not laid out like GNU/Hurd — I mean Linux.

  21. Brian Mingus
    August 15, 2006 at 5:37 pm

    Your results are confounded because you didn’t know how to use the native operating system. The comments here provide you with plenty of information to jump over that hurdle, so it’s a fair trade to the readers who tolerated your ranting for you to redo the review on Solaris. Besides, an objective review based on the facts will save your company the most money in the long run.

  22. August 15, 2006 at 5:50 pm

    I have to agree with some of the comments above. You are taking a Sun box, removing the default OS (an OS that from the ground up has been written for multi-threaded, multi-domain computing on the UltraSPARC arch) and installing an OS that CAN run on it. They only compiled up a version for the Niagara a little bit ago; I know for a fact it is not as tuned as Solaris is on this CPU. I have a T2000 and a V40z on my bench right now. The V40z runs super great (TM) with Linux, even Windows, but these are OSes built for x86. SPARC is built for Solaris. Cut the T2000 up into 10 zones, EACH with their own Oracle, WebLogic, and Entrust PKI package running, and watch the T2000 shine. The test results you have are invalid, for this test at least. Perhaps you should install Solaris for AMD64 on your AMD box, if you are going to go changing OSes for the hell of it? 😉

    BTW, who are these Solaris gurus you had on staff? It sounds like they couldn’t find their way out of the LOM.

  23. August 15, 2006 at 5:51 pm

    It was mostly the comment “I’d forgotten that Solaris contains *nothing* you need to actually use an OS” that made me really start thinking you weren’t giving Sun/Solaris a fair chance. If you take a good look at Solaris 10 I’m sure you’ll change your mind. Your software wasn’t really emphasized that much in the review. Besides using Apache for connection handling, it’d be nice if there was a little more information on the architecture and threading model of your software. Is it IO-intensive or CPU-intensive? Was the extra memory giving the T1000 an unfair advantage for your app, or does the T1000 really suck for it? Is there lots of floating point math, etc.? I know not everyone’s software is going to benefit from lots of CPU cores, but it’d be good if you explained why your software was a good real-world test case. If Sun doesn’t take you up on helping get your app running with Solaris, I could help you out some.

  24. August 15, 2006 at 6:00 pm

    We received our Sun Fire T1000 about a month ago to test. I was extremely surprised by the performance… and not in a good way.
    Single threaded, it takes about as long as an iBook G4@1.0GHz to compile our code (Java).
    We decided to try it out for our purposes to see if it could outdo our current Xeon systems. What we’re doing is somewhere along the lines of video rendering, though it’s more just moving data around; the content is actually already prerendered.
    Something’s seriously wrong with the configuration out of the box, because it clearly likes its idle cycles. I’m still working on it, but what we could do 8 of in two seconds on a Dual Xeon 3.4 we’re seeing take four minutes on the T1000 (for one, not eight). I’m working on it, but even the compile times don’t bode well.

    And then there’s the fact that Solaris is just such a pain… still haven’t got it nicely talking to Windows servers using SMB.

  25. Mel
    August 15, 2006 at 6:08 pm

    With regards to the power numbers… The 22.8 watts of power difference can come from a number of places. A spinning IDE disk at idle is probably less than 1-2 watts, which doesn’t explain the difference at all. Idle RAM power consumption depends heavily on how the RAM is banked and organized by the architecture. In theory, you could design a motherboard that keeps unused RAM DIMMs in a low-power memory-retention state, and I bet the unused 12GB would use less than 1 watt if done correctly. But it’s tough to say how *this* much RAM has been organized on the motherboard. It might account for 22.8 watts, and it might not.

    One of the biggest differences in power consumption could actually be the power supply. There are sometimes wide variations in supply efficiency which could easily account for 10-20 watts of difference, even if the two systems are actually consuming the same levels of power after the supply.

  26. Bill Rees
    August 15, 2006 at 6:17 pm

    One thing to realize about the Sun pricing on the T1000 is that it is a retail price that is usually discounted to customers during sales negotiations. While the actual cost is probably still higher, it isn’t the retail price.

    b

  27. Aaron
    August 15, 2006 at 6:22 pm

    For those of you who are complaining about him removing Solaris for the tests… you are forgetting a crucial piece of the story. In Jonathan Schwartz’s blog post (see http://blogs.sun.com/roller/page/jonathan?entry=ubuntu_on_niagara_and_platinum), he specifically called for testing with Ubuntu on these boxes. THAT is why he installed Ubuntu in favor of Solaris for the tests.

  28. Rob
    August 15, 2006 at 6:24 pm

    Great blog! I’ve added a link to your blog on Blog of the Day under the category of Computers. To view the feature of your blog, please visit http://blogoftheday.org/page/111925

  29. David
    August 15, 2006 at 6:56 pm

    I have the Sun X2100… Sun’s AMD based competitor.

    For whatever reason the on-board copy of Solaris wouldn’t boot and install; luckily I bought the CD-ROM and installed from media.

    As others have stated the Sol 10 distrib does actually come with everything you need, and wget is in place if not.
    Most admins config the servers for blastwave’s wget servers because they have newer package versions (www.blastwave.org).

    Anyway, I would definitely repeat your benchmark with Solaris 10 loaded.
    No telling what sort of OS optimizations you’ll benefit from running Sun’s OS on Sun’s hardware.
    But my guess is you should see some sort of bump.

    Good luck.

  30. August 15, 2006 at 8:29 pm

    “she uses 180 watts, but lets give her the benefit of the doubt and assume she’s 20% more power efficient than Olde Faithful when she only has 4GB of RAM and no HDD. That’d bring it to 125.76 watts (less than Olde Faithful when idle),”

    First of all, nice article, since you did the actual work and measurements. But how the hell do you come up with this statement and assume CoolThreads has an advantage based on your assumption that you can shave off 55 watts of power? You can’t just guess at these kinds of things; you must measure it, and you could have measured it. RAM sitting idle doesn’t use that much power – we’re talking about an extra 3 watts per DIMM at idle, and assuming you can remove 6 DIMMs (assuming 8 total DIMMs) you might recover 18 watts. One hard drive uses about 8 watts. That means you might drop power consumption by 26 watts. Also bear in mind that Woodcrest servers are way faster than this and use even less power. If a slow 2-socket Opteron can do this, a 2-socket Woodcrest system would murder the CoolThreads system.
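    A quick check of that arithmetic, using the commenter’s own per-component estimates (~3 W per idle DIMM, ~8 W per disk – estimates, not measurements):

```shell
# Recoverable idle power per the commenter's estimates:
# ~3 W per idle DIMM x 6 removable DIMMs, plus ~8 W for one disk.
dimm_watts=3; removable_dimms=6; disk_watts=8
echo $(( dimm_watts * removable_dimms + disk_watts ))   # prints 26
```

    So 26 watts is the most their estimates allow – well short of the 55-watt assumption being criticized.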

  31. Ed W.
    August 15, 2006 at 8:40 pm

    You really ought to try the worker mpm for Apache – when you have a machine like the T1000 this is a threading monster. Holy context switches, Batman!

    Most of the reason prefork is the default setting for Apache is people using PHP and other software packages that aren’t threadsafe. If you use threadsafe packages you won’t believe the difference in performance.

    Go back, and recompile apache with mpm=worker on both boxes and see what that number does for you.
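    For reference, the rebuild being suggested would look something like this against the Apache 2.2 source tree from the script earlier in the thread (a sketch only – prefix and tree layout are assumptions, and it is not tested here):

```shell
# Rebuild Apache 2.2 with the threaded worker MPM instead of the default prefork.
cd httpd-2.2.2
./configure --prefix=/opt/httpd --with-mpm=worker
make
make install

# Verify which MPM was compiled in:
/opt/httpd/bin/httpd -V | grep 'Server MPM'
```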

    Another suggestion for you – find a java (like JBoss) app server and a benchmark application. Java application servers are very thread-intensive, and that of course is Sun’s baby. Peak throughput on single-threaded stuff will go to the x86 processor pretty much every time, but just see the load you can toss at a j2ee app on Sun hardware.

    If you do happen to run j2ee tests, be sure to install the native threading libraries – much greater performance there.

    If you happen to install Solaris (10 is good stuff), there are compilers included on the CDs, but they’re not part of the default install. Don’t bother with the make and cc that are in /usr/ccs. To avoid all the hassle, go to sunfreeware.com and get gcc from them.

    I totally agree that sun hardware is way freaking expensive. try buying a video card from them – they charge like 750 bux for what is essentially a radeon 7500.

  32. Jon E
    August 15, 2006 at 8:45 pm

    Dude .. learn Solaris and some better tools .. if you really need the GNU tools that aren’t under /usr/sfw and some of the Berkeley-style basics under /usr/ucb, and can’t be bothered to learn the other Solaris equivalents (prstat, mpstat, pfiles, pstack, etc.) and extra richness .. get blastwave going:
    # /usr/sfw/bin/wget http://blastwave.org/pkg_get.pkg
    # pkgadd -d ./pkg_get.pkg
    # /opt/csw/bin/pkg-get -a
    # /opt/csw/bin/pkg-get -i top

    Oh .. and you might want to think about a better compiler .. remember the old compiler that got stripped out of SunOS 4.1 when Solaris rolled around? Well it used to cost a pretty penny over the years for RTUs but was pretty well developed and expanded on over the years .. now it’s a free install over at developer.sun.com .. just follow the link for Sun Studio Compilers .. loads of associated tools and excellent documentation (works on linux too) ..

    Lastly, you might want to consider a web server that’s better threaded. The Linux threading model is still pretty rough, and by testing Apache you’re simply testing fork/exec calls .. so really you’re just testing how quickly you can copy address spaces while context switching .. you do realize that the Sun Web Server (think Netscape derivative) is also a free download .. you might want to bench this against your olde faithful ubuntu/apache rig ..

  33. August 15, 2006 at 9:05 pm

    Remember, we weren’t benchmarking Apache – we were benchmarking our application.

    Recompiling for Apache worker won’t help because A) Apache wasn’t doing any CPU, our scripts were, and B) our scripts aren’t thread-safe. 🙂

  34. Yusuf
    August 15, 2006 at 9:16 pm

    Don, don’t you keep a reverse proxy in front of your Apache and then turn Keep-Alive off? This would reduce the number of Apache processes.
    It’s pointless tying up a heavy Apache process talking to clients. An event-driven reverse proxy with epoll/event ports would be very useful.

    Also, you used ‘ab’. Apache developers themselves call ‘ab’ dreadful and recommend using Flood

    http://journal.paul.querna.org/articles/2005/07/05/response-to-debunking-lighttpd

  35. Tom Dickson
    August 15, 2006 at 9:47 pm

    Why not unplug/remove the additional RAM and HDD and run the test again? Also, did you install Ubuntu on your other box, too? It wouldn’t be fair to have the difference be caused by a Linux distro difference.

  36. Karoly Negyesi
    August 15, 2006 at 10:53 pm

    Guys, this game was over before it started. If you have an application which can run on a cluster – and most web apps are such beasts – then you need to compare the performance of that Niagara machine to two of those Opteron boxen, or even three of a bit smaller ones. And the reliability too… if I have 2-3 web frontends and one goes south, oh well, life goes on. And I can buy cheaper Opterons if I do not want redundant PSU or cooling – which I definitely do not want; I do not care about a single server, I care only about my app. Anyone heard of Google?

  37. Miguel Reimer
    August 16, 2006 at 12:37 am

    Google? The ones spending all those extra dollars on energy for their cheap/inefficient pieces of hardware? And that’s not even mentioning how much more they are paying for cooling.

    Google’s biggest expense right now is electricity as all those machines consume monstrous amounts of energy, and a lot of that energy is just used to generate heat.

  38. August 16, 2006 at 2:56 am

    @Miguel: True, but it does not look like the T1000 does save power in this case?

    I assume a lot of real-world applications scale (as badly) as Don’s real-world application.

    Greetings
    Bernd

  39. August 16, 2006 at 7:11 am

    (Disclaimer: I work for Sun)

    According to the SPECWeb2005 benchmark ( http://www.spec.org/web2005/results/web2005.html ), a single T1000 is around 10% faster than two Opteron 885 (four cores at 2.6GHz). Of course this is with Solaris 10 and Sun’s web server…

    You can find some Apache tuning tips for the T1000 at http://www.sun.com/servers/coolthreads/tnb/applications_apache.jsp

    Apache 2.2.0 uses PortFS which is known to trigger bug 6367349. Please use Apache 2.0.55 for now. Also, 2.2.0 is known to not perform well on Niagara.

    The ab benchmark should be run with a line like this: “./ab -n 10000 -c 40 -k http://niagara_hostname/index.html.ee”
    -n 10000 is the total number of requests to be made per test against Apache.
    -c 40 is the number of concurrent requests issued each time throughout the test.
    -k means keep the connection alive.
    index.html.ee is a static page file to be returned by Apache.

    This way you will get the most load possible to the server. Also, whenever possible, please try to run the same benchmark with 16GB of RAM on both servers, as the biggest problem most customers report is running out of memory before getting the CPU usage high enough…

    Good luck!

  40. me
    August 16, 2006 at 8:46 am

    For those concerned there’s no compiler with Solaris 10: you just have to download it separately. It’s free, too. Sun Studio 11.

  41. Patrick
    August 16, 2006 at 9:07 am

    Yep, Bernd.
    But now is the time to get this changed. What kind of performance gains are people with badly scaling software going to get in the future, when additional resources are only available through adding cores or CPUs?
    This is good reading for people not yet convinced that they should sharpen their concurrent-programming saw:
    http://www.gotw.ca/publications/concurrency-ddj.htm

    Greetings,

    Patrick
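    Patrick's point about badly scaling software can be made concrete with Amdahl's law: if only a fraction p of the work can run in parallel, 32 hardware threads buy far less than 32x. A small sketch (the fractions chosen are illustrative, not measured):

```python
def amdahl_speedup(parallel_fraction, n_threads):
    # Amdahl's law: the serial remainder (1 - p) caps the speedup no
    # matter how many hardware threads you throw at the problem.
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_threads)

if __name__ == "__main__":
    for p in (0.50, 0.90, 0.99):
        print(f"{p:.0%} parallel -> {amdahl_speedup(p, 32):.1f}x on 32 threads")
```

    Even a 90%-parallel workload tops out below 8x on a 32-thread Niagara, which is why single-thread-heavy code sees so little benefit.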

  42. jon
    August 16, 2006 at 10:34 am

    @Zero

    “Single threaded, it takes about as long as an iBook G4@1.0GHz to compile our code (Java).”

    Probably because your compiler is single threaded. Think about it.

    “And then there’s the fact that Solaris is just such a pain… still haven’t got it nicely talking to Windows servers using SMB. ”

    Wow. Just wow. If you seriously can’t install the samba package, edit the config file, add the users and start the service, no wonder you can’t figure out why you’re having performance problems. Good lord.

  43. Walter Moore
    August 16, 2006 at 7:50 pm

    /usr/sfw/bin/gcc

    /usr/sfw/bin/gtar

    /usr/sfw/bin/make

    prstat -a

    now you know enough to use Solaris.

    Also – ALOM has an excellent https interface (with console redirection) – that’s probably the most useful way to use it.

    DHCP being off by default really bugs me though.

  44. Walter Moore
    August 16, 2006 at 7:54 pm

    Also, we are using an 8-core T2000 as a postgres server; this thing is a beast! It replaced a stack of Intel boxes and has tons of headroom. Solaris is really, really happy on this platform. You said your sweet spot was 32 active processes, but with all the wait time in database services, our box is happy out to hundreds of active postgres connections.

    This was not a very good or useful review of the hardware.

  45. August 17, 2006 at 9:46 am

    You need to load the boxes till the throughput starts to go down or the response time becomes too long. If you do not, then you are not testing the T1 to its capabilities. I have done similar testing and the T1 always wins out. You also need to look at mpstat while you run the tests. If you are not able to load the system up with one process, you might need to run multiple instances of the process, or else you will have a half-idle CPU.

    You need to push both boxes till they fall down. Only then will you find out what the boxes are capable of.

    Peter

  46. August 17, 2006 at 11:39 am

    Disclaimer: I too work for Sun

    Thanks for taking the time to write up a review. At a minimum it at least gives us all a starting point for conversation.

    That said, I think some misconceptions being spread need to be corrected in a revision of your review. I think the lack of experience with Solaris has led you to state many things which are inaccurate and deserve some form of correction. As many have mentioned, many GNU tools (gcc, tar, make, etc.) are available out of the box; at most you may need to edit your PATH to access them. I don’t think the fact that Solaris doesn’t operate just like Linux should be taken as a negative (although we are doing things to minimize the differences, since we know we have to cater to various trends). Granted, Solaris isn’t perfect and there are tons of things we need to address to make it easier for people to start taking advantage of Solaris more quickly, but given the potential benefits I think you’ll see the investment is well worth the time.

    I highly recommend you spend some time reading the materials at

    http://developers.sun.com/prodtech/solaris/learning/new2solaris/index.html

    I know finding a few hours can be a PITA, but it really will help you move the ball down the field. Once you’ve done that, check out this page

    http://www.sun.com/servers/coolthreads/tnb/applications.jsp

    to get a good sense of the various documented tuning tips people have used to get the most out of running apps on US T1 based systems. Given what you’ve said thus far, I suspect there are improvements that can be made regardless of whether you stick with Ubuntu or try out Solaris. Also, as others have said, I suspect that without any changes you could still crank things up and see the T1000 scale while the Opteron box starts to become unresponsive.

    To address what others have said about Ubuntu vs Solaris: I see no problem running Ubuntu if that’s what you are comfortable with. A potential performance gain with Solaris may not justify the additional admin expense, so I have no issues with a choice like that. On top of that, a series of very in-depth reviews of running Linux on US T1 based systems (now in production) can be read here:

    http://www.stdlib.net/~colmmacc/category/niagara/page/
    http://www.stdlib.net/~colmmacc/category/niagara/page/2/
    http://www.stdlib.net/~colmmacc/category/niagara/page/3/
    http://www.stdlib.net/~colmmacc/category/niagara/page/4/
    http://www.stdlib.net/~colmmacc/category/niagara/page/5/
    http://www.stdlib.net/~colmmacc/category/niagara/page/6/
    http://www.stdlib.net/~colmmacc/category/niagara/page/7/

    This guy has gotten tremendous performance out of his T2000/Ubuntu setup so to me that’s a good thing for Sun and just gives us more motivation to make Solaris as friendly to the masses as we can.

    I hope this helps….

  47. perfgeek
    August 18, 2006 at 4:53 pm

    wrt the power consumption, you should be able to pull out half the RAM and re-measure the power. You can then use the delta to figure watts per DIMM. You won’t be able to pull a full 12GB out of the thing because RAM has to go in groups of four DIMMs (8 on the T2000), so if you have 16GB, that suggests you have 8 x 2GB DIMMs, which means you can pull four of the DIMMs and be at 8GB. To get to only 4GB of RAM in the thing you would have to change out the DIMMs for either four 1GB or eight 512MB modules.
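    The delta method above is simple arithmetic; a sketch, with hypothetical wattages standing in for real measurements:

```python
def watts_per_dimm(watts_before, watts_after, dimms_removed):
    # Delta method: re-measure wall power after pulling a group of DIMMs
    # and attribute the difference evenly to the removed modules.
    return (watts_before - watts_after) / dimms_removed

# Hypothetical numbers: 16GB as 8 x 2GB DIMMs, one group of four pulled,
# and a made-up 20 W drop at the wall.
print(watts_per_dimm(193.0, 173.0, 4))
```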

    as far as the power consumed by the disk, its OEM probably has specs online somewhere.

    wrt SPECweb2005, two of the three workloads which go into the metric involve SSL; the details should be in SPEC’s docs on SPECweb2005. IIRC Banking is all SSL, Ecommerce is “mostly” SSL, and Support is no SSL at all. Sometimes it can be interesting to look beyond the SFM (single figure of merit) at the component benchmarks. The SSL stuff is likely helped by the added HW support for a few of the operations involved in RSA. Certainly Sun’s web server knows how to use it, or more accurately their in-kernel accelerator does (if one checks the fine print of the SPECweb2005 disclosures); how much other software can make use of it will likely depend.

    wrt HTTP keepalive and ab: if the idea is to simulate the production load, then keeping the connection alive in ab may not be the most correct thing to do. The connection churn (establish and disconnect) should match what production does. Now, if one simply wants a synthetic measure, then by all means keep keepalive at maximum.

  48. August 18, 2006 at 11:50 pm

    We went through this exercise with a T2000 a couple of months ago with our own server (OpenLDAP). It was no contest: the Opteron system is significantly faster at every load point, from one thread out to 32 threads, regardless of whether it runs Solaris or Linux (though Linux on the Opteron was faster yet).
    http://www.connexitor.com/blog/pivot/entry.php?id=49

    We didn’t bother to measure power consumption. Suffice to say, the T2000 was one of the loudest boxes in our machine room. After our eval period was up, we returned the T2000. I’ll keep the Opteron, thanks…

  49. bill
    August 19, 2006 at 4:46 am

    ok, ok, we know Solaris has gcc, that top sucks, and that you can get better crap from SunFreeware.com; stop telling us that. Read the freaking thread before you add your drivel.

  50. August 19, 2006 at 9:55 am

    Comparing x86/Linux vs. SPARC/Linux is lame. Go Solaris. Learn to install it the way you can use it; it’s easy.

  51. ra
    August 20, 2006 at 10:53 am

    What a load of crap this review is; the reviewer is clearly not objective, lacks knowledge of both Sun hardware and Sun software, and then blames Sun for it.
    Really sad.

  52. August 20, 2006 at 2:10 pm

    I know plenty of people have mentioned the Solaris vs Linux issue and I’m afraid I’m going to be another one.

    If you wander over to http://www.spec.org and have a look at the results there, you’ll see some performance oddities which explain why you need to evaluate the performance of the sort of app you are running against multiple OSes and compilers. If you look at Sun’s results for various SPEC CPU benchmarks, they report figures in some cases for the same hardware (some Opteron systems) running Solaris/Sun Studio and Linux/various compilers. The Solaris systems score higher (15% in some cases) in SPECfp whilst the Linux systems score marginally higher (

  53. LGB
    August 23, 2006 at 5:05 am

    Well, just because Linux admins are familiar with ‘top’, it does not mean that this is the only good way. Solaris comes with a utility called ‘prstat’. I’ve just met a Solaris admin asking why ‘that crap Linux does not contain prstat; it’s a must for an OS’ 🙂 Also, Solaris tells a somewhat different story when it comes to the paths of executables: for example it has /usr/sfw for free stuff (thus /usr/sfw/bin instead of /usr/bin, /usr/sfw/include instead of /usr/include, etc.); also try checking /usr/ccs and /usr/xpg4. OK, you may say ‘that’s stupid’. However, a Solaris guy may say that having everything in /usr/bin _IS_ stupid 🙂 Solaris often contains different versions of the same tool for various standards; you can adjust your PATH variable to select or not select some. This can also be seen as ‘strange’. But please note again that strangeness is relative; the Linux way is also called strange by Solaris experts, I think 😉 So you should not blame Sun or Solaris just because it does things differently than Linux.

    No, I’m not a Solaris expert, though I administer several Solaris systems. Yes, I know Linux better too. But I don’t think that Linux is ‘better’ or that its behaviour is superior compared to Solaris, or vice versa.

  54. Mihai Maties
    August 23, 2006 at 11:03 am

    Hi,

    According to Sun’s published results ( http://www.sun.com/servers/coolthreads/t1000/benchmarks.jsp#l ), a single UltraSPARC T1 processor (the 8-core @ 1GHz version) with a 16GB memory configuration draws 188 watts of power vs. the 193 watts measured by Don.

    Sun’s power calculator for the T2000 ( http://www.sun.com/servers/coolthreads/t2000/calc/index.jsp – usual disclaimers apply) allows one to estimate idle/max power ratings in watts. A few examples (no peek taken at the JavaScript):
    1. 4x1GB DIMMs – 6/14
    2. 16x1GB DIMMs – 24/54
    3. 4x2GB DIMMs – 11/23
    4. 8x2GB DIMMs – 22/46 (vs 24/54 watts for 16x1GB)
    5. 1x73GB 2.5″ 10K RPM SAS – 8/11
    6. 1x450W supply, 12V power – 24/30
    7. 1x550W supply, 12V power – 55/69 (seems less efficient than the 450W supply; wattage varies slightly for Coolthreads@1.2Ghz)

    With a 450W supply
    8. Fixed components + Coolthreads@1Ghz – 127/140
    9. Fixed components + Coolthreads@1.2Ghz – 132/146
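    The calculator appears to compose its estimate additively from per-component (idle, max) wattage pairs; a sketch using the figures quoted above (the additive model is my assumption, not Sun's documented formula):

```python
def total_power(components):
    # Sum per-component (idle, max) wattage pairs to get a system estimate.
    idle = sum(i for i, _ in components.values())
    peak = sum(m for _, m in components.values())
    return idle, peak

config = {
    "fixed components + CoolThreads@1GHz": (127, 140),  # with 450W supply
    "8 x 2GB DIMMs": (22, 46),
    "1 x 73GB 2.5in 10K RPM SAS disk": (8, 11),
}
print(total_power(config))
```

    That yields an estimate of 157 watts idle and 197 watts at peak for such a configuration, in the same ballpark as the typical figures quoted elsewhere in this thread.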

    The power measurements for the T2000 tests show consumption of between 309 and 329 watts. I could not quickly figure out what might have caused the variation other than
    1. 32GB of memory – this could make a 45-watt difference
    2. the disk subsystem needed for Oracle db storage, logging, archiving, etc. My guess is that Sun could not afford, performance-wise, to use only the (at most) four internal disks and must have used some external storage requiring one or more PCI(-X) adapters rated at 3 watts apiece, while at the same time lowering the wattage used for internal disk storage: at least two rabbits with one shot.

    The fine print at the bottom states how the power rating estimates were produced for some of the competing systems: “estimated by calculating 70% of the power supply reported in the Quick Specs”.

    Sun’s technical docs give the typical (180 watt) and maximum (220 watt) consumption for the T1000 with a 300 watt power supply; the 70% rule applies to it as well, but for the maximum power drawn, not for the typical.

    IMHO Sun might in some cases have produced best-case performance/watt gains, as they used maximum consumption for their competitors and the actual measured consumption (very close to typical values) for their own systems; they would still have won the comparisons.

    Given the increasing importance of the performance/watt factor, it would be good to systematically get measured power consumption figures along with benchmark results.

    Oh… the T1000 (6 cores only) is right now at $3,595 (http://store.sun.com/CMTemplate/CEServlet?process=SunStore&cmdViewProduct_CP&catid=148202&PROMO)

  55. Dave Tong
    September 7, 2006 at 11:17 am

    Another thing to consider is heat. A system that uses less power should produce less heat and thus require less air conditioning.

  57. photobug
    February 8, 2007 at 11:01 am

    Hi Don,

    I’m a long-time DGrin’ner (referred by Fish, early on) and SmugMug subscriber who raves about SmugMug’s incredible responsiveness and customer service to anyone who will listen ;-). I also happen to receive paychecks from Sun.

    The results you saw on the T1000 were not at all what I would have expected. The T1000/T2000 servers are the first systems Sun has shipped with *highly*-threaded (32-way) CPUs. I understand that one of the key issues with them is that customers don’t tend to see right out of the box what those servers are capable of, because the OS (Solaris, and possibly the Linux distros) doesn’t yet come pre-configured to take advantage of what the Niagara processor can do. The initial Solaris configuration settings are more appropriate to, say, a dual-thread (dual-core, single-thread-per-core) UltraSPARC IV processor than to the 32-thread (8-core, 4-threads-per-core) Niagara processor.

    Sun is definitely working to remedy the out-of-the-box OS configuration issue. In the meantime, Sun provides software called “CoolTune” as part of the Cool Tools package that will ask you a few questions and set up all the *right* Solaris configuration parameters for you, for much better performance after that.

    I agree, many of the tools (esp. Linux ones) aren’t immediately available on the command line when you see that first shell prompt. It’s not that they are missing; their directory just isn’t on the default $PATH. Arguably, it should be … but that aside, I think someone else already told you that many of them can be found in /usr/sfw/bin.

    Secondly, did you get any help from Sun’s PAE group in uncovering what the issues were? You might have found a significant improvement if they had been involved. (I’ve heard that you did have some contact with that group regarding some other Sun gear you were evaluating.)

    I was so jazzed when I heard that SmugMug was considering (and as you announced in another blog, actually chose) Sun gear. Way cool. I suspect you’ll find some other Sun solutions that will be great fits at SmugMug, as time goes on ;-). I’m sure I’m not the first to offer, but if you’re not getting what you need, drop me a line and I’d be happy to help connect you up with the right folks at Sun.

  59. October 5, 2007 at 3:59 pm

    Your ignorance is stunning. Or is it just the typical Linux-ite ‘Linux is the best thing since sliced bread’ prejudice against real Unices? All that GNU/freeware stuff you were looking for is in /opt/sfw/bin or /usr/sfw/bin, and the equivalent to ‘top’ is ‘prstat’. (You really should keep up with the real Unix scene.) Now I have a question for you: if Linux is so superior to Solaris, where are its equivalents to the (absolutely essential) pstack, ptree, psig, pcred, pflags, and pldd commands in Solaris? 🙂 Awaiting your response. JG

  60. paul
    January 12, 2008 at 7:39 pm

    Rough power consumption:
    HDD: 10W idle, 20-25W when transferring data
    RAM: 10W for _each_ DIMM, no matter what size; i.e. 40W for one bank of four.

    On the one hand, your comparison is more than a joke. How can you be so ignorant and not even invest an evening in learning the basics of Solaris?
    I’m pretty sure that Solaris on that machine would be _much_ faster at your sweet spot, because the IP stack was reworked. Plus you get administrative and debugging tools (e.g. DTrace) that you can’t have with Linux.
    On the other hand, you’re right in that “your app is your benchmark”.
    At least you should be aware that you benchmarked the L2 cache throughput. Is this workload really your daily business?

  61. January 12, 2008 at 9:14 pm

    @paul:

    Clearly, you haven’t read through the thread or the follow-up posts. Here’s a brief summary:

    I set up a 2nd T1000 in our datacenter with Solaris on it, so we had one with Solaris and one with Linux. Then I went over to Sun and sat in a room for an entire day with the high-performance computing team and we fiddled with both servers.

    The Linux one outperformed the Solaris one even after all the Sun guys had worked on it. Further, our AMD boxes outperformed both.

    I believe we ran into limitations on network interrupts rather than anything CPU-related, but regardless, this fairly common workload fell down on the T1000.

    I believe Niagara2 would perform much better since it has on-board 10GigE NICs, but we haven’t done any testing yet.


Comments are closed.