December 20, 2008
Internet Downtime - 2008's Major Incidents
pingdom.com has an interesting review of the "top 10" Internet downtime incidents in 2008: everything from severed underwater cables and fires to denial-of-service attacks and data center power outages. Good lessons all. Wishing you only "uptime" in 2009.
by Will Runyon | December 20, 2008 in SysAdmins
October 21, 2008
The mainframe for mid-sized businesses
An IBM mainframe priced at $100,000? Yes, you read that right. The newest System z mainframe, the z10 Business Class, is a follow-up to last February's launch of the z10 Enterprise Class, and is priced and packaged for mid-sized companies.
According to InformationWeek, "The system is priced at less than $100,000, making it affordable for companies in developing nations. IBM is offering zero-interest, zero-payment financing on the system for the first 90 days. The z10 offers users big opportunities for server consolidation. It holds the capacity of up to 232 x86 servers within a footprint that's 83% smaller. One company that plans to use the system is Transzap, a provider of electronic payment services for the oil industry. 'We're a small company but our transaction data volumes are growing upwards of 100 percent, annually,' said Transzap CEO Peter Flanagan."
Caption: Created for mid-sized businesses, the IBM z10 BC simplifies commercial computing operations with "specialty engines" that run popular business and consumer applications (email, website hosting, transaction processing, etc.) on one of the world's most trusted and secure computer platforms. IBM co-op student Sean Goldsmith surveys the new z10 BC mainframe at IBM's Poughkeepsie, NY, plant; the system can add an extra 1,000 email users using no more energy than a 100-watt light bulb. Goldsmith, a senior at Marist College, anticipates a bright future with the mainframe.
CRN also reported that "IBM is working with more than 130 solution providers and systems integrators worldwide who are certified to sell IBM System z mainframes. The certification of IBM System z sales and technician skills has increased 300 percent during the first half of 2008 compared with the same time period in 2007. IBM expects about 70 percent of z10 BC sales to go through IBM's solution providers."
by Will Runyon | October 21, 2008 in Design, Energy Efficiency, Power & Cooling, SysAdmins, Virtualization
September 02, 2008
Batman, Iron Man . . . Meet Green Data Center Man
by Will Runyon | September 2, 2008 in Design, Energy Efficiency, Power & Cooling, Services, SysAdmins
July 22, 2008
Greenmonk: Data centers as energy exporters, not energy sinks!
Tom Raftery at Greenmonk recently published a thoughtful post titled Data Centers as energy exporters, not energy sinks! Tom's post includes quotes from Intel's Nick Knupffer and Steve Sams at IBM on progress being made to reduce heat at the chip level.
Tom reports . . . "However, according to the video below, which I found on YouTube, IBM are going way further than I had thought about. They announced their Hydro-Cluster Power 575 series super computers in April. They plan to allow data centers to capture the heat from the servers and export it as hot water for swimming pools, cooking, hot showers, etc. This is how all servers should be plumbed."
by Will Runyon | July 22, 2008 in Design, Energy Efficiency, Power & Cooling, SysAdmins, Who's Who
June 05, 2008
EPA Seeks Input on Data Center Energy Consumption
At the recent Uptime Institute Symposium in Orlando, Andrew Fanara of the U.S. Environmental Protection Agency (EPA) explained how its Energy Star Program, in coordination with the U.S. Department of Energy (DOE), is implementing a National Data Center Energy Efficiency Information Program. Through the EPA's Energy Star web site, data center managers can complete a series of forms that will be used to measure server energy use, power and cooling requirements, and more. Hat tip to Matt Stansbury at SearchDataCenter.com.
Watch more from Andrew Fanara.
by Will Runyon | June 5, 2008 in Assessments, Design, Energy Efficiency, Power & Cooling, SysAdmins, Who's Who
October 08, 2007
Big Blue Going Green
When you click on a link, a server in a datacenter somewhere gets the job of finding the web page or process you requested and delivering it to your browser over the Internet. One user on the Internet and one server at the other end serving one web page is quite trivial. With millions of users around the world visiting a web site at unpredictable times and making unpredictable requests for millions of documents, pictures, music, videos, processes and transactions, it can become a nightmare for the people managing the datacenter. In the last five years there has been a six-fold increase in computing capacity and a 160-fold increase in storage. Along with the increase in capacity comes a huge increase in complexity and in electrical power usage.
Imagine looking through a window into a corporate datacenter (even though many of them are underground and have no windows): you would see thousands of steel boxes mounted in six-foot-high racks with cables everywhere. This part of the problem has been addressed by a technology called virtualization, pioneered by IBM decades ago but greatly refined in recent years. (See "Virtually Real or Really Virtual".)

Now imagine a virtual datacenter. When you peer through the window you see three boxes -- a server, a disk storage device, and a network card. A person at a large video console is looking at what appears to be a dashboard: a pictorial diagram of everything going on in the datacenter. When one application area needs more server, storage, or network capacity, the virtual datacenter automatically re-allocates capacity from another application area that currently has excess. It keeps resources balanced, and when a component fails, it automatically allocates a spare or underutilized component to take over. (A small sketch of this balancing loop appears below.)

Virtual environments allow a big reduction in complexity, but the even bigger problem is the huge growth in electrical power. In many cases companies cannot get the additional power they need, either because the power company does not have the capacity or because the datacenter is not designed to accommodate the necessary physical changes. And even when power is readily available, using more of it has a negative impact on the environment. Hence, Big Green.
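As promised above, the automatic reallocation boils down to a simple balancing loop. Here is a minimal sketch in Python; the AppArea class, its field names, and the numbers are hypothetical illustrations, not any actual product's API:

```python
# Hypothetical sketch of a virtual datacenter's balancing loop.
from dataclasses import dataclass

@dataclass
class AppArea:
    name: str
    allocated: int  # capacity units currently assigned to this application area
    demand: int     # capacity units this application area currently needs

def rebalance(areas: list[AppArea]) -> None:
    """Shift spare capacity from under-used areas to overloaded ones."""
    donors = [a for a in areas if a.allocated > a.demand]
    needy = [a for a in areas if a.allocated < a.demand]
    for n in needy:
        for d in donors:
            move = min(d.allocated - d.demand, n.demand - n.allocated)
            if move > 0:
                d.allocated -= move
                n.allocated += move

areas = [AppArea("web", allocated=10, demand=4),
         AppArea("billing", allocated=4, demand=9)]
rebalance(areas)
print([(a.name, a.allocated) for a in areas])  # [('web', 5), ('billing', 9)]
```

A real virtual datacenter would of course rebalance continuously and weigh workload priorities, but the principle is the same.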
IBM is redirecting $1 billion per year across its businesses, mobilizing the company's resources to dramatically increase the level of energy efficiency in IT. The plan includes new products and services to enable IBM clients to sharply reduce data center energy consumption and make their operations more "green". The problem is sizable: big companies spend tons of money on power -- in IBM's case, a half billion dollars per year. Historically the priority has been on getting whatever servers and storage were needed to achieve business results. Need another feature for the web site? Throw in another server. Growth in web visitors? Throw some more servers at it.
IBM is leading by example. One of its "green" projects is consolidating 3,900 servers onto 30 new top-of-the-line mainframe servers. The result is not only more compute power but dramatically less use of electrical power and space. One IBM customer went from 300 servers to six. The University of Pittsburgh Medical Center consolidated 1,000 servers onto 300 and saved $20 million in costs while freeing up datacenter space for more hospital beds.
Datacenters have been popping up everywhere -- most of them built before 2001. They are very large rooms full of many different kinds of equipment, designed the same way they were decades ago: like a kitchen where the stove puts out more heat, so you turn on the air conditioning to cool down the entire room. The chef is comfortable and everyone else in the room is freezing. IBM is now designing datacenters for customers where cooling "zones" are specific to the type of equipment in each zone. Green datacenters not only save space and energy but also benefit the environment overall. In the past, the electric bill was allocated as overhead to all parts of the company; with the huge growth of energy for the IT infrastructure, CFOs are reallocating energy expenditures from general overhead to the CIO so companies can see what IT is really costing. Redesigns are saving many millions of dollars.
IBM has made a sizeable consulting business out of helping customers understand their energy usage and then designing and supervising the building of new datacenters and cooling equipment. Having overseen the construction of thirty million square feet of advanced space, IBM has learned a lot. Virtualization is helping a lot too: it can now optimize the use of servers around energy consumption. For example, as workload declines, perhaps at night, virtual servers can be "moved" onto underutilized machines, and the servers that are not needed for a few hours can be turned off automatically.
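That night-time "move and turn off" step is, at its heart, a bin-packing problem. Below is a hedged sketch using the classic first-fit-decreasing heuristic; the VM loads and host capacity are made-up numbers, and no real migration API is shown:

```python
# Hypothetical sketch: pack virtual machines onto as few hosts as possible,
# so the remaining hosts can be powered off during off-peak hours.

def consolidate(vm_loads: list[float], host_capacity: float) -> list[list[float]]:
    """First-fit-decreasing bin packing: returns one list of VM loads per host."""
    hosts: list[list[float]] = []
    for load in sorted(vm_loads, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity:
                host.append(load)  # this VM fits on an already-active host
                break
        else:
            hosts.append([load])   # only bring up a new host when nothing fits
    return hosts

# Night-time loads for 8 VMs, as fractions of one host's capacity:
hosts = consolidate([0.10, 0.15, 0.05, 0.20, 0.10, 0.05, 0.15, 0.10], 1.0)
print(f"{len(hosts)} host(s) stay on; the rest can be powered off")  # 1 host
```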
(See other IBM Happenings)
by John R Patrick | October 8, 2007 in Design, Energy Efficiency, Power & Cooling, Services, SysAdmins, Virtualization
May 21, 2007
Virtualization Vital in Driving Data Center Efficiency
Server power consumption has increased eightfold, and with the installed base climbing more than 15% annually, IDC estimates that by 2010 roughly 35 million servers will be installed in data centers around the globe. This "installed base boom" is no doubt widening the gap between server demand and data center power and cooling capabilities. At the same time, the boom gives us an opportunity to come together and explore new ways to increase power performance, improve productivity and efficiency, generate new capacity for future growth and, oh yeah, reduce costs. In the end, we'll be contributing to a greener planet.
We can get started by consolidating the footprint of the physical servers in play today, and virtualization is vital to making that happen. While technical in nature, virtualization is really a fairly easy concept to comprehend: systems use energy and give off heat whether they are busy 100% of the time or just 15% of the time. Server virtualization allows a physical server to be partitioned to run multiple secure virtual servers, reducing the number of physical machines and the cost and energy that go with them.
California-based Pacific Gas and Electric Company (PG&E) is adopting virtualization to lead the way, transforming its San Francisco, Fairfield and Diablo Canyon IT operations into energy-efficient data centers. The company has adopted a Mobile Measurement Technology (MMT) system to measure temperature distributions in its data centers, which is helping PG&E consolidate and virtualize servers to reduce energy consumption. And from a system utilization standpoint, PG&E expects utilization to increase from 10 percent of capacity to more than 80 percent.
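Some back-of-the-envelope math shows why that utilization jump matters. The workload figure below is an arbitrary illustration, not a PG&E number:

```python
# If the same total workload moves from servers averaging 10% utilization
# to servers run at 80%, far fewer machines are needed.
work_units = 1000                   # hypothetical total workload, arbitrary units
servers_before = work_units / 0.10  # capacity units needed at 10% utilization
servers_after = work_units / 0.80   # capacity units needed at 80% utilization
print(f"Consolidation ratio: {servers_before / servers_after:.0f}x fewer servers")  # 8x
```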
PG&E is also the first energy company to offer incentives to spur adoption of virtualization technology, enticing customers to dismantle under-utilized computing and data storage equipment. The program reimburses PG&E customers up to 50 percent of the costs of a server consolidation project – including software, hardware and consulting – to a maximum of $4 million per customer. Kudos to PG&E for taking a leadership role in promoting the virtualized data center.
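The rebate formula as described is simple enough to state in a few lines; this is a sketch of the math in the post, not PG&E's official calculator:

```python
# 50% of server consolidation project costs, capped at $4 million per customer.
def pge_rebate(project_cost: float) -> float:
    return min(0.5 * project_cost, 4_000_000)

print(f"${pge_rebate(3_000_000):,.0f}")   # $1,500,000 back on a $3M project
print(f"${pge_rebate(12_000_000):,.0f}")  # capped at $4,000,000 on a $12M project
```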
Is your company driving innovative data center solutions we could all benefit from? Let's use The Raised Floor as our platform for staying abreast of some of the best.
by Mike Desens | May 21, 2007 in Design, Energy Efficiency, SysAdmins, Virtualization
May 08, 2007
The Data Center Energy Crisis
Welcome to The Raised Floor blog, a group-authored discussion about today's and tomorrow's data centers. Please share your comments and come back often.
As I talk to customers around the world about their data centers, it's obvious the data centers are in crisis, and in many cases the people running them are not really sure what to do.
This crisis appears to be a mismatch between requirements and capabilities. Let me give you some examples that reflect the trends I've seen from reading a number of consulting studies over the last few years.
On the requirements side, to meet application demands and regulatory requirements (Sarbanes-Oxley, HIPAA, Basel II), customers are installing more and more technology. Over the last 10 years, one estimate is that the server install base has grown by 6X and the storage install base by 69X. (UPDATE: See correction below.)
If these numbers are close to being true, then data centers must be having problems trying to keep up with demand.
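For perspective, here is a quick sketch of the compound annual growth rates those multiples imply, assuming (per the correction at the end of this post) that the growth spans the years 2000 to 2010:

```python
# Compound annual growth rate implied by an overall growth multiple.
def cagr(multiple: float, years: int) -> float:
    return multiple ** (1 / years) - 1

print(f"Servers: {cagr(6, 10):.0%} per year")   # ~20% per year
print(f"Storage: {cagr(69, 10):.0%} per year")  # ~53% per year
```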
In my view, this demand on the data center has three flavors:
Technology demands - How do I install new, dense technologies like blade servers in a data center that was never designed to support them? These technologies are inexpensive, flexible, scalable and generally significantly more energy- and cooling-efficient than their predecessors, so expect demand for them to keep skyrocketing.
Demands for increased energy use - How do I get enough power and cooling into my data center to support my technology needs? Growth rates like those mentioned previously are putting huge pressure on existing data center infrastructures that were built a number of years ago. One estimate is that more than 80% of current data centers were built prior to 2001, and Gartner just published an opinion that any data center more than five years old is obsolete.
Demands for increased expense - Take the growth curve for servers and storage and turn it into a growth curve for energy use, then multiply that steep increase in energy use by rising energy prices. I don't know about your location, but the cost of energy has been growing at double-digit rates where I live. The impact on IT expense is significant: power may now be 30-40% of the IT operations budget, if energy is actually charged out to users based on real costs. (A quick sketch of the compounding follows this list.)
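Here is the compounding sketch promised above. Both growth rates are assumptions chosen for illustration, not measured figures:

```python
# If energy use grows with the install base while the price of energy also
# rises, the energy bill grows as the product of the two rates.
use_growth = 0.20    # assumed annual growth in data center energy use
price_growth = 0.10  # assumed annual growth in energy price ("double-digit")
years = 5

bill_multiple = ((1 + use_growth) * (1 + price_growth)) ** years
print(f"Energy bill after {years} years: {bill_multiple:.1f}x today's")  # ~4.0x
```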
If these demands appear to reflect your data center environment, then what should we all be doing?
CORRECTION: The projected 6x growth in servers and 69x growth in storage is expected to occur between the years 2000 and 2010.
by Steve Sams | May 8, 2007 in Assessments, Design, Energy Efficiency, Power & Cooling, Services, SysAdmins, Virtualization, Who's Who
The postings on this site are our own and don’t necessarily represent the positions, strategies or opinions of our employers.
© Copyright 2005 the respective authors of The Raised Floor Weblog.