Data center workloads become more complex despite promises to the contrary
The data center is shouldering a greater burden than ever, despite promises of ease and the cloud.
Data centers are becoming more complex, and they still run the majority of workloads despite the promised simplicity of deployment through automation and hyperconverged infrastructure (HCI), not to mention the expectation that the cloud would take over those workloads.
That’s the finding of the Uptime Institute's latest annual global data center survey (registration required). Even in the face of cloud adoption, the majority of IT loads still run in enterprise data centers, pressuring administrators to manage workloads across a hybrid infrastructure.
With workloads like artificial intelligence (AI) and machine learning coming to the forefront, facilities face greater power and cooling challenges, since AI is extremely processor-intensive. That strains data center administrators and power and cooling vendors alike as they try to keep up with the growth in demand.
On top of it all, everyone is struggling to get enough staff with the right skills.
Outages, staffing problems, lack of public cloud visibility among top concerns
Among the key findings of Uptime's report:
- The large, privately owned enterprise data center facility still forms the bedrock of corporate IT and is expected to be running half of all workloads in 2021.
- The staffing problem affecting most of the data center sector has only worsened. Sixty-one percent of respondents said they had difficulty retaining or recruiting staff, up from 55% a year earlier.
- Outages continue to cause significant problems for operators. Just over a third (34%) of all respondents had an outage or severe IT service degradation in the past year, and half (50%) had one in the past three years.
- Ten percent of all respondents said their most recent significant outage cost more than $1 million.
- A lack of visibility, transparency, and accountability of public cloud services is a major concern for enterprises that have mission-critical applications. A fifth of operators surveyed said they would be more likely to put workloads in a public cloud if there were more visibility. Half of those using public cloud for mission-critical applications also said they do not have adequate visibility.
- Improvements in data center facility energy efficiency have flattened out and even deteriorated slightly in the past two years. The average power usage effectiveness (PUE) for 2019 is 1.67 (see the worked example after this list).
- Rack power density is rising after a long period of flat or minor increases, causing many to rethink cooling strategies.
- Power loss was the single biggest cause of outages, accounting for one-third of them. Sixty percent of respondents said their data center’s outage could have been prevented with better management, processes, or configuration.
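The report cites PUE without unpacking it: power usage effectiveness is total facility energy divided by the energy that actually reaches IT equipment, so 1.0 is the ideal. The short Python sketch below works through the arithmetic; the kilowatt figures are invented for illustration, and only the 1.67 average comes from the survey.

```python
# Minimal sketch: computing power usage effectiveness (PUE).
# The kilowatt figures below are invented for illustration; only the
# 1.67 average comes from the Uptime survey.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / power delivered to IT equipment (1.0 is ideal)."""
    return total_facility_kw / it_equipment_kw

total_kw = 1670.0  # hypothetical total draw: IT load plus cooling, UPS losses, lighting
it_kw = 1000.0     # hypothetical power reaching servers, storage, and network gear

print(f"PUE: {pue(total_kw, it_kw):.2f}")  # -> PUE: 1.67
```

Read that way, a 1.67 average means roughly 0.67 watts of cooling, power-conversion, and lighting overhead for every watt of IT load, which is why the rising rack densities noted above are forcing a rethink of cooling strategies.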
Traditionally, data centers have improved their reliability through "rigorous attention to power, infrastructure, connectivity and on-site IT replication," the Uptime report says. The solution is pricey, though. Data center operators are achieving distributed resiliency through active-active data centers, in which at least two active sites replicate data to each other; Uptime found up to 40% of those surveyed were using this method.
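For readers unfamiliar with the pattern, here is a minimal sketch of the active-active idea, not anything described in the report: each site accepts writes and replicates them to its peer, so either can serve the full load alone. The site names, dictionary "stores," and replicate_to() helper are all hypothetical stand-ins for real database- or storage-layer replication.

```python
# Minimal sketch of active-active replication between two sites.
# Site names, store_locally(), and replicate_to() are hypothetical;
# real deployments replicate at the database or storage layer.

SITES = {"dc-east": {}, "dc-west": {}}  # toy in-memory stores, one per data center

def store_locally(site: str, key: str, value: str) -> None:
    SITES[site][key] = value

def replicate_to(peer: str, key: str, value: str) -> None:
    # In production this would be an asynchronous replication stream.
    SITES[peer][key] = value

def write(site: str, peer: str, key: str, value: str) -> None:
    """Accept the write at the local site, then replicate it to the peer."""
    store_locally(site, key, value)
    replicate_to(peer, key, value)

# Either site can take writes; each keeps the other's data current.
write("dc-east", "dc-west", "order:42", "shipped")
write("dc-west", "dc-east", "order:43", "pending")
assert SITES["dc-east"] == SITES["dc-west"]
```

Running two hot sites duplicates much of the infrastructure, since each must be provisioned to absorb the other's load, which is one reason resiliency of this kind does not come cheap.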
The Uptime survey was conducted in March and April of this year, surveying 1,100 end users in more than 50 countries and dividing them into two groups: the IT managers, owners, and operators of data centers and the suppliers, designers, and consultants that service the industry.
Author: Andy Patrizio Topic selection: lujun9972 Translator: 译者ID Proofreader: 校对者ID