Saturday, February 27, 2010

Data Center World, Spring 2010

PTS Data Center Solutions will be presenting and exhibiting at this spring’s Data Center World Event, held in Nashville from March 7-11. Data Center World is the largest global event of its kind and has been named one of the 50 fastest growing tradeshows in the U.S. It is the leading educational conference for data center professionals.

Our team will host a roundtable discussion on Information Technology Infrastructure Library (ITIL) & ITSM Metrics Programs for the data center. This presentation will take a nuts-and-bolts approach to setting up an ITSM metrics program and will discuss how this process allows IT to present data to senior management.

We’re also hosting a product information session, titled “Data Center Maintenance Management Software - Computerized Maintenance Management for the Data Center”, during which we’ll demonstrate how you can use best-in-class solutions to more effectively manage support infrastructure. The presentation will discuss Computerized Maintenance Management Systems (CMMS) and present our new Data Center Maintenance Management Software (DCMMS) Solution. This innovative software application from PTS Data Center Solutions allows the user to manage assets and parts, estimate and manage maintenance costs, track recurring problems to pinpoint those that may lead to more critical issues, and generate work orders with the details needed to properly perform preventative maintenance.

In addition, I’d like to invite you to visit us at booth #739 where you can get a first-hand look at our specially designed DCMMS solution. To learn more, please contact Amy Yencer at AYencer@PTSdcs.com (201-337-3833 x128).

To register for the event, please visit http://www.datacenterworld.com/. See you in Nashville!

What is your definition of a "Green" Data Center solution?

Is your organization looking for "Green" Data Center solutions, or are you looking to incorporate "Green" into your Data Center design in 2010 or in this decade? Below are some thoughts on this important issue regarding building "Green" Data Centers. We're interested in hearing your opinions & ideas as well.

For the most part, “Green” solutions for the Data Center are, in my opinion, a bit of an oxymoron: most supposed “Green” solutions still have a carbon footprint & typically rely on power generated from fossil fuels. We also find that Data Center owners & operators are rarely willing to reduce availability to improve the efficiency of their Data Center. That being said, our design philosophy is to design “Greener” Data Center infrastructure technologies, where possible, into any proposed new builds, renovations, and upgrades for Data Center facilities.

In our opinion, the 1st step towards “Greener” Data Centers is collecting accurate measurements & trending the environmentals in your Data Center facilities, so that proposed changes can be modeled & efficiency fine-tuned. PTS has been running several monitoring & management tools in our own Data Center facility, as well as in our clients’ Data Centers, for several years. We use this base knowledge, industry best practices, & PTS’ proven trade secrets during a design engagement to propose “Greener” solutions where applicable & in line with the rest of the key design criteria for a project. In our experience, many “Green” solutions such as solar & hydro power rarely make a meaningful impact on a Data Center design; however, using water- or air-side economizers to take advantage of the free cooling days available in an applicable climate can provide a reasonable ROI while “Greening” the Data Center.
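As a rough illustration of the kind of analysis that trended environmental data enables, the sketch below counts potential "free cooling" hours in a set of hourly outdoor temperature readings. The temperature samples and the 15°C economizer changeover threshold are hypothetical, not PTS methodology:

```python
# Illustrative sketch only: estimating air-side economizer ("free cooling")
# hours from trended outdoor dry-bulb temperatures collected by a
# monitoring system. Threshold and readings are invented examples.

def free_cooling_hours(hourly_temps_c, changeover_c=15.0):
    """Count the hours cool enough to serve the load with outside air."""
    return sum(1 for t in hourly_temps_c if t <= changeover_c)

# e.g. a slice of trended hourly readings from the monitoring tools:
sample_temps = [4.0, 9.5, 14.0, 16.5, 21.0, 12.0, 7.5, 18.0]
print(free_cooling_hours(sample_temps))  # 5 of these 8 sample hours qualify
```

Run against a full year of trended data, the same count drives the ROI math for an economizer retrofit.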

In addition, by eliminating air mixing in Data Centers, we reduce the power consumed by the HVAC systems supporting a Data Center, and we prove these savings in cooling through CFD modeling before making investments. ASHRAE has widened the temperature range in the new TC9.9 recommendations for Data Center operations, but before we embrace this “Greener” standard and maximize the set points for supply & return air, three things must be considered:
• First, air mixing must be eliminated as much as possible, because as we raise set points, “hot spot” issues & inefficiencies will be amplified.
• Second, raising set points reduces the availability of the Data Center, so any proposed increase in set point made to be “Greener” must stay in line with the availability requirements established in the Key Design Criteria of the project.
• Third, many server fans spin faster as intake temperature rises, so there is an inflection point beyond which raising set points further will not continue to lower power consumption.
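The inflection point mentioned above can be illustrated with a toy model. All coefficients here are invented for illustration, not measured data: chiller power is assumed to fall linearly per degree of set-point increase, while server fan power grows with the cube of fan speed (per the fan affinity laws) once intake temperatures push fans above their baseline speed:

```python
# Toy model (invented coefficients, not measured data): total power versus
# supply-air set point. Chiller savings are linear per degree; server fan
# power follows the cubic fan affinity law once fans ramp above 24 C.

def chiller_kw(setpoint_c):
    return 100.0 - 4.0 * (setpoint_c - 18.0)       # assumed linear savings

def server_fans_kw(setpoint_c):
    speed = 1.0 + max(0.0, setpoint_c - 24.0) * 0.10  # fans ramp above 24 C
    return 20.0 * speed ** 3                           # cubic affinity law

for sp in range(18, 29):
    total = chiller_kw(sp) + server_fans_kw(sp)
    print(f"set point {sp} C -> total {total:.1f} kW")
```

In this invented example, total power bottoms out at the fan ramp threshold; raising the set point past it costs more in fan power than it saves in cooling, which is exactly the inflection behavior to watch for in real measurements.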

Focusing on effective Data Center capacity management is key to any "Greening" initiative:
• Better predictability of space, power, and cooling capacity and redundancy limits means more time to plan ways to mitigate their effect
• Increased real-time availability of IT operations as a result of an enhanced understanding of the present state of the power and cooling infrastructure and environment
• Reduced operating cost from energy usage effectiveness and efficiency as well as operator effectiveness from the use of automated tool sets
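A minimal sketch of the capacity-headroom tracking behind the first bullet: compare measured load against design capacity for each constrained resource to see which limit will be hit first. All figures here are invented for illustration:

```python
# Hypothetical sketch of capacity-headroom tracking: measured load versus
# design capacity for space, power, and cooling. Values are invented.

capacities = {"power_kw": 400.0, "cooling_kw": 380.0, "racks": 120}
measured   = {"power_kw": 312.0, "cooling_kw": 295.0, "racks": 97}

for resource, cap in capacities.items():
    used = measured[resource]
    pct = 100.0 * used / cap
    print(f"{resource}: {used}/{cap} used ({pct:.0f}% of capacity)")
```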

In our experience, most facility-oriented "Greener" solutions provide only a fraction of the efficiency gains found in IT-focused solutions such as server consolidation, virtualization, & data deduplication. That's not to say we shouldn't consider the facility-oriented "Greener" solutions, especially if they fall in line with our design criteria & ROI needs, but we should focus on the IT side 1st: the savings are greater, & our capacity requirements will be appropriately defined if we become IT efficient first.
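A back-of-the-envelope arithmetic sketch (all figures invented) of why IT-side measures like consolidation usually dwarf facility-side tweaks: collapsing many lightly used physical servers onto a few well-utilized virtualized hosts removes whole servers' worth of idle power draw:

```python
# Illustrative consolidation arithmetic with invented figures: many lightly
# loaded physical servers replaced by fewer, better-utilized hosts.

servers_before = 40
watts_per_server = 300        # hypothetical average draw at low utilization
hosts_after = 6
watts_per_host = 550          # hypothetical draw at high utilization

before_kw = servers_before * watts_per_server / 1000.0
after_kw = hosts_after * watts_per_host / 1000.0
print(f"Before consolidation: {before_kw:.1f} kW of IT load")
print(f"After consolidation:  {after_kw:.1f} kW of IT load")
```

Every kW removed from the IT load also removes the cooling and power-chain losses needed to support it, which is why getting IT-efficient first also right-sizes the facility design.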

Why are so many still using guesswork to determine their needs for power?

It is 2010, & so many data center & IT managers are still relying on manual derated nameplate calculations to manage the power required throughout their power chain, even though many of these data centers are on the verge of running out of power & many have experienced outages due to tripped circuits. Many data center & IT managers come to us looking for real-time monitoring of power; many solutions are evaluated, but few ever get implemented. I'm trying to figure out why so many are not investing in real-time power management.

The Green Grid's white paper "Proper Sizing of IT Power and Cooling Loads" discusses the fluctuations in IT power draw due to inlet temperature changes, server component changes, virtualization, etc.: http://www.thegreengrid.org/en/Global/Content/white-papers/Proper-Sizing-of-IT-Power-and-Cooling-Loads

The potential danger in using derated nameplate information to calculate power requirements cannot be overstated. Unvirtualized servers typically use about 15% of their processing power; once virtualized, we see processor utilization in the 60-95% range, which correlates directly to power draws much closer to nameplate values, as the Green Grid pointed out in the white paper. Most IT organizations are rapidly adopting virtualization technology to consolidate and operate more efficiently at the same time, which is a good thing, but it is putting rapid pressure on previously underutilized power infrastructures in data centers.
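One common way to see the gap is a simple linear power model, where draw scales between idle and maximum with utilization. The sketch below is illustrative only; the idle/max wattages are invented, and real servers should be measured, not modeled:

```python
# Illustrative sketch (invented wattages, not a PTS tool): a linear power
# model shows how virtualization pushes draw from well below a derated
# nameplate figure to well above it.

def estimated_draw_watts(p_idle, p_max, utilization):
    """Linear model: draw scales between idle and max with utilization."""
    return p_idle + (p_max - p_idle) * utilization

nameplate_watts = 750
derated_watts = nameplate_watts * 0.6   # a common rule-of-thumb derating
p_idle, p_max = 220, 600                # hypothetical measured values

unvirtualized = estimated_draw_watts(p_idle, p_max, 0.15)
virtualized = estimated_draw_watts(p_idle, p_max, 0.80)

print(f"Derated nameplate estimate: {derated_watts:.0f} W")
print(f"Estimated at 15% util:      {unvirtualized:.0f} W")
print(f"Estimated at 80% util:      {virtualized:.0f} W")
```

In this invented example the virtualized server lands well above the derated figure, which is exactly how circuits sized by derated calculations end up tripping after a consolidation project.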

With so many variables to account for, how can one depend on derated calculation tools? With so many real-time tools available to measure & trend power accurately, including branch circuit monitoring, outlet-level monitored power strips, in-line power meters, IPMI, and extensive software options, why are so many still trying to use derated calculations to guesstimate the power they'll need for higher-density virtualized deployments? This guesswork leads to potential circuit breaker trips & designed-in inefficiencies throughout the entire power chain. With rising power costs, less power capacity available, & so many looking to operate a more efficient, "greener" data center footprint, I am amazed that so few are investing in real-time power monitoring tools that would allow them to plan & manage capacity effectively.
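As a sketch of what real-time branch-circuit data makes possible, the example below flags circuits whose measured draw exceeds 80% of breaker rating, the NEC guideline for continuous loads. The circuit names and readings are invented; the readings are what branch circuit monitoring or monitored power strips would supply:

```python
# Hypothetical sketch: flag branch circuits running past 80% of breaker
# rating (the NEC continuous-load guideline) using measured amperage that
# real-time monitoring supplies. Circuit names and values are invented.

BREAKER_DERATE = 0.80  # continuous loads should stay under 80% of rating

circuits = [
    {"name": "PDU1-BR03", "rating_amps": 20, "measured_amps": 17.2},
    {"name": "PDU1-BR04", "rating_amps": 20, "measured_amps": 11.5},
    {"name": "PDU2-BR01", "rating_amps": 30, "measured_amps": 25.1},
]

for c in circuits:
    limit = c["rating_amps"] * BREAKER_DERATE
    status = "AT RISK" if c["measured_amps"] > limit else "ok"
    print(f'{c["name"]}: {c["measured_amps"]:.1f} A of {limit:.1f} A -> {status}')
```

With trended measured data, this kind of check replaces guesswork before a new virtualized load is dropped onto an already-stressed circuit.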

Considerations for Storage Consolidation

The growth of company files, e-mail, databases, and application data drives a constant need for more storage. But with many networks architected with storage directly attached to servers, growth means burdensome storage management and decreased asset utilization. Storage resources remain trapped behind individual servers, impeding data availability.

There are three storage consolidation architectures in common use today:
  • direct-attached storage (DAS),
  • network-attached storage (NAS), and
  • the storage area network (SAN).

DAS is the traditional structure, in which storage is tied directly to a server and is accessible only through that server. In NAS, the hard drive that stores the data has its own network address; files can be stored and retrieved rapidly because they do not compete with other computers for processor resources. The SAN is the most sophisticated architecture and usually employs Fibre Channel technology, although iSCSI-based SANs are becoming more popular due to their cost-effectiveness. SANs are noted for high throughput and their ability to provide centralized storage for numerous subscribers over a large geographic area, and they support data sharing and data migration among servers.

So how do you choose between DAS, NAS, and SAN architectures for Storage Consolidation? Once a particular approach has been decided upon, how do you decide which vendor solutions to consider? There are a number of factors involved in making a qualified decision, including near- and long-term requirements, type of environment, data structures, and budget, to name a few. PTS approaches Storage Consolidation by leveraging our proven consulting approach:
  • to gather information on client needs,
  • survey the current storage approach, and
  • assess future requirements against their needs and the current approach.

Critical areas for review and analysis include:
  • Ease of current data storage management
  • Time spent modifying disk space size at the server level
  • Storage capacity requirements to meet long term needs
  • Recoverability expectations in terms of Recovery Time Objectives and Recovery Point Objectives
  • Needed structuring of near- and off-line storage for survivability and ease of access to data
  • Security needed to maintain data storage integrity
  • Evolving storage complexity if current architecture is maintained
  • New applications considered for deployment
  • Requirement to provide Windows clustering
  • Interest in considering Thin Provisioning
  • Storage spending as a percentage of total IT budget
PTS reviews all of the items above, and more; we then design the best storage architecture for both near- and long-term requirements, and we are able to source, install, and manage leading-edge storage solutions from companies such as Dell and Hitachi.

Ultimately, Storage Consolidation positively impacts the costs associated with managing your IT network in terms of redundancy, disaster recovery, and network management. It also allows for a more secure network, free from wasted assets tied to particular servers or data center components. Finally, the tasks of provisioning, monitoring, reporting, and delivering the right storage service levels can be time-consuming and complex; Storage Consolidation will enhance your ability to manage your organization's data storage.