July 30, 2007
Energy Efficiency: The Real Story
I always knew the label ratings for the energy used by mainframes reflected maximum configurations and did not show what was really happening on the raised floor. Now that z9s can be measured by actuals, it is very easy to see that the energy efficiency is really there. I looked at a z9 EC model S54 and saw that when 10 temporary engines were added, it used just a little more than 1% more power. The overall actual measurement, before and after the addition of the 10 engines, was always less than 12 kW. I am going to work in August to get more facts and real numbers out to the world.
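To put those numbers in perspective, here is a quick back-of-the-envelope calculation. The 12 kW ceiling and the roughly 1% increase are the measurements described above; the per-engine figure is just derived arithmetic, not a separate measurement.

```python
# Back-of-the-envelope check of the z9 EC S54 measurements described above.
# The 12 kW total and ~1% increase come from the post; the per-engine
# number is simply derived from them, not measured separately.

total_power_kw = 12.0          # measured draw was always under 12 kW
increase_fraction = 0.01       # adding 10 temporary engines added ~1%
engines_added = 10

added_power_w = total_power_kw * 1000 * increase_fraction
per_engine_w = added_power_w / engines_added

print(f"Extra power for {engines_added} engines: ~{added_power_w:.0f} W")
print(f"That is roughly {per_engine_w:.0f} W per temporary engine")
```

In other words, each temporary engine added on the order of 12 watts to a box already drawing under 12 kilowatts.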
by Dave Anderson | July 30, 2007 in Energy Efficiency
July 05, 2007
Amdahl's Law and Emerging Tech ...
There is a paradigm shift coming to the world of computing (with tremendous impact on data centers) over the next few years. Ok, that is the easy part; now I have to explain myself. The following technologies will dramatically alter (disrupt!!!) computing as we know it:
1. multi-socket multi-core commodity computers in blade form factors
2. 10GbE networking with less than 5us ping-pong latency
3. high-density, cost-effective DIMMs (8 GB, 16 GB and 32 GB)
4. virtualization
Ok, so how does this apply to Amdahl's law? Let's not use the strict definition, but instead use it in a conceptual sense (this is how most programmers use it). If you break a system into parts A and B, and A dominates the performance, then improvements in B have little to no discernible impact. However, if you make a massive improvement in A, then the improvements in B become much more relevant. In fact, it is very possible to reverse the polarity and find a situation where B is now the performance bottleneck.
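Here is a minimal sketch of the strict form of the law, just to show the reversal described above; the 90/10 split and the speedup factors are illustrative numbers, not measurements from any real system:

```python
# Amdahl's law: overall speedup when only one part of the work is improved.
# The time fractions and speedup factors below are illustrative only.

def amdahl_speedup(fraction_improved, factor):
    """Overall speedup when `fraction_improved` of the time is sped up by `factor`."""
    return 1.0 / ((1.0 - fraction_improved) + fraction_improved / factor)

# Suppose part A (say, link latency) is 90% of the total and part B is 10%.
frac_a, frac_b = 0.9, 0.1

# Speeding up B by 10x while A dominates barely registers:
print(amdahl_speedup(frac_b, 10))   # ~1.10x overall

# Speed up A by 20x and the picture flips: B is now the bottleneck.
print(amdahl_speedup(frac_a, 20))   # ~6.9x overall; the untouched 10% now dominates
```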
Now, let's keep the above in mind when we think about most high-end computing, especially in the messaging arena. One of the big issues is the constant need to reduce latency. Latency, by definition, has two components: process and link. 1GbE networks are dominated by high ping-pong latency (75 - 100 us) and thus hide relatively high process latency. With 10GbE networks delivering 3.5 - 15 us of ping-pong latency (based on current NIC cards), the game suddenly changes. With 10GbE networks, process latency becomes the new battlefield.
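To make that concrete, here is a toy breakdown of end-to-end latency into its link and process components. The link numbers come from the ranges above; the 20 us of process latency is an assumed, illustrative figure, not a measurement.

```python
# End-to-end latency = link (ping-pong) latency + process latency.
# Link numbers are taken from the ranges quoted above; the 20 us of
# process latency is an assumed figure used only for illustration.

process_us = 20.0

for network, link_us in [("1GbE", 85.0), ("10GbE", 5.0)]:
    total = link_us + process_us
    print(f"{network}: link {link_us:.0f} us + process {process_us:.0f} us "
          f"= {total:.0f} us total ({process_us / total:.0%} of it spent in process)")
```

On 1GbE the process share is under 20% of the round trip; on 10GbE the same process latency is suddenly 80% of the total, which is exactly why it becomes the new battlefield.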
How do we attack process latency? With CPU speeds flattening, the only way to achieve higher levels of performance (less process latency) is to take advantage of all available computing power. CPUs are increasingly providing this power in the form of multiple cores as opposed to increases in clock frequency. By the end of 2008, 8-core CPUs will be commodity. Imagine that you now have a 2-socket computer with 16 cores (8 cores per CPU). All of a sudden your ability to exploit parallelism is real, i.e. you have enough real CPUs to get the job done. With the combination of advances in OS schedulers, lock-free coding techniques and large amounts of RAM, there is a real opportunity to push the limits of parallel performance without the interconnect as a roadblock.
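As a toy illustration of spreading work across all of those cores, here is a sketch using Python's standard multiprocessing pool; the per-message workload is made up purely for demonstration and is not any particular messaging code:

```python
# Toy illustration of fanning CPU-bound work out across all available cores.
# The per-message workload below is invented for demonstration only.

from multiprocessing import Pool, cpu_count

def process_message(msg_id):
    # Stand-in for some CPU-bound per-message work.
    return sum(i * i for i in range(50_000)) + msg_id

if __name__ == "__main__":
    messages = list(range(1_000))
    # e.g. 16 worker processes on a 2-socket box with 8 cores per CPU
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(process_message, messages)
    print(f"Processed {len(results)} messages using {cpu_count()} cores")
```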
Computer systems need balance, and the emergence of 10GbE, multicore CPUs and large amounts of cost-effective RAM provides a better solution than at any point in the past 15 years. A whole new programming paradigm will emerge, and this transition is so big that many new entrants will have an opportunity to get into the game. Legacy code that has limped along riding Moore's law without much innovation will finally be up for true competition. This means that lots of enterprise software vendors that have enjoyed "lock-in" success over the past decade will have to revamp their code or face the real possibility of getting put out to pasture. Transitions create opportunity, and this transition is like no other in recent history.
There is so much that I have not even yet touched upon. All of the items listed above are somewhat linked. Let me conclude (before this gets too long) on the role of virtualization.
Virtualization is another way to exploit multi-socket multi-core computers. However, to provide virtualization at scale you need lots of bandwidth, RAM, cores and low latency. How does this apply to data centers? If the basic commodity computing platform provides a cost-effective and full-performance user experience for desktop virtualization, then expect data centers to get a lot more crowded. With the emergence of various VM technologies such as VMware, Xen, KVM (Kernel-based Virtual Machine) and Microsoft's Viridian, there is plenty of evidence that competition will advance the state of the art very quickly.
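As a rough sizing sketch, the host and per-VM figures below are assumptions of mine (they do not come from VMware, Xen, KVM or Viridian documentation); the point is only that RAM, cores and bandwidth determine how many desktop VMs a single commodity host can carry:

```python
# Rough back-of-the-envelope for desktop-VM consolidation on one host.
# All host specs and per-VM requirements here are assumptions for illustration.

host = {"cores": 16, "ram_gb": 64, "net_gbps": 10}
per_vm = {"cores": 0.5, "ram_gb": 1.0, "net_gbps": 0.1}   # assumed desktop VM footprint

# Whichever resource runs out first caps the consolidation ratio.
limits = {resource: host[resource] / per_vm[resource] for resource in host}
vms = int(min(limits.values()))

print(limits)   # shows which resource is the binding constraint
print(f"~{vms} desktop VMs per host under these assumptions")
```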
Lastly, the only topic I did not hit was RAM. By the end of this year, you will see new products come to market that dramatically change the price curve for high-density DIMMs. New methods of stacking memory chips will enable the production of cost-effective high-density DIMMs that change the way buyers think about how much memory should be provisioned. By the end of 2008, think about having a commodity computer with 16 cores and 64 GB of memory costing somewhere between $10-20K.
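As a quick sanity check on that configuration, here is how a 64 GB target maps onto the DIMM densities mentioned earlier; the 16-slot count (8 slots per socket on a 2-socket box) is an assumption for illustration:

```python
# How many DIMMs of each density it takes to reach 64 GB.
# The 16-slot count (8 per socket, 2 sockets) is an assumed figure.

target_gb = 64
slots = 16

for density_gb in (8, 16, 32):
    dimms_needed = target_gb // density_gb
    fits = "fits" if dimms_needed <= slots else "does not fit"
    print(f"{density_gb} GB DIMMs: {dimms_needed} needed ({fits} in {slots} slots)")
```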
So, what is the point? The main point I want you to walk away with is that if you are building a data center, either now or in the near term, you need to think ahead. The importance of the data center for both server and desktop computing will only grow over the next decade. Deploying old technology and not understanding how close we are to a real transition would be a huge mistake. Be extremely mindful of the role the network will play. Invest in the network and lots of new possibilities open up, especially as commodity compute power creates a powerful substrate for virtualization technology to reach its full potential.
by Jeffrey M. Birnbaum | July 5, 2007 in Design
The postings on this site are our own and don’t necessarily represent the positions, strategies or opinions of our employers.
© Copyright 2005 the respective authors of The Raised Floor Weblog.