After reading the Greenpeace, Renewable Energy, and Data Centers blog entry from my colleague James Hamilton a couple of weeks back, I took a look at the Greenpeace report on data center power consumption and noted that it’s pretty unusual for an environmental report not to feature energy conservation as a primary evaluation criterion.
It seems to me that any analysis of the climate impact of a data center should take into consideration resource utilization and energy efficiency, in addition to power mix. Carbon emissions are driven by three items: the number of servers running, the total energy required to power each server, and the carbon intensity of energy sources used to power these servers. Using fewer servers and powering them more efficiently are at least as important to reducing the carbon impact of a company’s data center as its power mix. I thought it would be interesting to run the numbers on this and take a look at how these three factors interact when it comes to overall carbon emissions from compute activity.
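To make the relationship concrete, here’s a minimal sketch of that three-factor model in Python; the function, parameters, and example numbers are mine, purely for illustration:

```python
# A minimal sketch of the three-factor emissions model: emissions scale with
# (number of servers) x (energy per server) x (carbon intensity of the power mix).
def annual_emissions_tonnes(servers, kwh_per_server, grams_co2_per_kwh):
    """Annual CO2 emissions, in metric tons, for a fleet of servers."""
    grams = servers * kwh_per_server * grams_co2_per_kwh
    return grams / 1_000_000  # grams -> metric tons

# Example: 1,000 servers at 8,760 kWh/year each (a 1 kW average draw),
# on a 545 g/kWh power mix:
print(annual_emissions_tonnes(1000, 8760, 545))  # ~4,774 metric tons
```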
I’ll get to the math in a minute, but what I ended up with is the following:
On average, AWS customers use 77% fewer servers, consume 84% less power, and run on a 28% cleaner power mix, for a total reduction in carbon emissions of 88% from using the AWS Cloud instead of operating their own data centers.
Let’s take a closer look at these numbers to get a better sense of the efficiency and power conservation gains that are possible through cloud computing.
Cloud Customers Consume 77% Fewer Servers
Let’s first look at server utilization and the number of servers required to support a given group of workloads. On-premises data centers typically have fairly low server utilization rates. This is because companies can’t afford to run out of server capacity. Without sufficient capacity, applications fail, sales don’t get completed, customers don’t get served, and critical business data doesn’t get tracked. Servers and related IT resources are required for the company to maintain high service quality through peak load periods.
This peak capacity is only rarely used and, consequently, average server utilization levels are often under 20%. In contrast, large-scale cloud infrastructure operators have a much larger pool of customers and applications allowing them to smooth out peaks and run at much higher overall utilization levels. In addition, innovations that are made possible by the scale and dynamic nature of the cloud, such as the EC2 Spot Market, help to drive utilization even higher and lead to additional efficiency improvements.
The 2014 Data Center Efficiency Assessment from the Natural Resources Defense Council (NRDC) has cloud server utilization at 65% and on-premises utilization running 12 to 18%, which is consistent with other estimates I’ve come across over the years. So, with approximately 65% server utilization for the typical large-scale cloud provider versus 15% on-premises, the same applications can be supported in the cloud with only 23% of the server resources (15% ÷ 65%). In other words, companies typically provision fewer than 1/4 of the servers that they would on-premises. This alone is a material gain — but there are significant power efficiency differences as well!
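If you want to check that arithmetic, here’s a quick back-of-the-envelope sketch using the NRDC utilization figures above (the variable names are mine):

```python
# Capacity needed scales inversely with utilization: the same workload
# running at higher utilization needs proportionally fewer servers.
on_prem_utilization = 0.15  # midpoint of the 12-18% on-premises range
cloud_utilization = 0.65    # NRDC estimate for large-scale cloud providers

server_fraction = on_prem_utilization / cloud_utilization
print(f"Servers needed in the cloud: {server_fraction:.0%}")  # 23%
print(f"Server reduction: {1 - server_fraction:.0%}")         # 77%
```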
Cloud Customers Consume 84% Less Power
A common measure of infrastructure efficiency is Power Usage Effectiveness (PUE). This is the total power brought to a data center (called total power) divided by the power delivered to the servers, storage, and networking equipment (called critical power). The difference between total power and critical power is the power lost in data center power distribution, cooling, and, to a lesser extent, lighting and other power-consuming overhead items. Lower is better when looking at PUE.
The annual Uptime Institute survey has found average data center PUE to be 1.7 (Industry Average Data Center PUE Stays Nearly Flat Over Four Years). Large-scale cloud providers run at scale and invest deeply in efficiency since, at that scale, these investments have real and rapid paybacks. Some megascale operators report PUE numbers as low as 1.07, and Google reports an impressive PUE of 1.12. Some of the smaller cloud providers may invest less in efficiency improvements, so I’ll use a more conservative 1.2 as the cloud industry average PUE, with the understanding that some operators, including AWS, run more efficiently.
Using this data, a prospective customer moving from on-premises to a cloud deployment goes from an average PUE of 1.7 down to 1.2, which means that, for like-powered servers, power consumption in the cloud would be about 29% lower than in on-premises data centers (1.2 ÷ 1.7 ≈ 0.71).
So, if you multiply the two factors together, 77% fewer servers required (i.e. the cloud requires only 23% of the servers needed for the same workloads) and each of those servers drawing only 71% of the facility power, customers need only 16% (23% x 71%) of the power as compared to on-premises infrastructure. This represents an 84% reduction in the amount of power required.
To put this into perspective, the NRDC estimates that total US data center power consumption was 91 billion kilowatt hours (kWh) in 2013. If all of the workloads in these data centers were migrated to the cloud, we would see a reduction in annual power consumption of more than 76 billion kWh. That would be equivalent to the combined annual residential power consumption of the states of New York and Kentucky.
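Here’s that power math end to end, as a quick sketch using the figures cited above (variable names are mine):

```python
server_fraction = 0.23   # from the utilization comparison above
cloud_pue, on_prem_pue = 1.2, 1.7

power_fraction = server_fraction * (cloud_pue / on_prem_pue)
print(f"Power needed in the cloud: {power_fraction:.0%}")  # 16%
print(f"Power reduction: {1 - power_fraction:.0%}")        # 84%

# Applying that reduction to the NRDC's 91 billion kWh estimate for 2013:
us_dc_kwh = 91e9
print(f"Annual savings: {us_dc_kwh * (1 - power_fraction) / 1e9:.0f} billion kWh")  # ~76
```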
Cloud Customers Reduce Their Carbon Emissions by 88%
The massive improvement in energy efficiency drives a huge reduction in climate impact because less energy consumed means fewer carbon emissions. The climate impact improvements get even better when you factor in that the average corporate data center has a dirtier power mix than the typical large-scale cloud provider.
A popular way to look at the climate impact of power mix is carbon intensity (grams of carbon emissions per kWh of energy used). Using data from the International Energy Agency report Key World Energy Statistics 2014, the global average is 545 grams/kWh.
As a cloud example, the June 2015 AWS average power mix carbon intensity is 393 grams/kWh. Measured this way, large-scale cloud providers use a power mix that is 28% less carbon-intensive than the global average (393 ÷ 545 ≈ 0.72).
Combining the fraction of energy required (16%) with the relative carbon intensity of the power mix (72%), you end up with only 12% (16% x 72%) of the carbon emissions. This represents an 88% reduction in carbon emissions for customers when they use AWS vs. the typical on-premises data center.
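Here’s the full combination as a sketch, again with the figures cited above (variable names are mine):

```python
power_fraction = 0.16    # fraction of power required, from above
aws_intensity = 393      # grams CO2/kWh, AWS power mix (June 2015)
global_intensity = 545   # grams CO2/kWh, IEA global average

intensity_fraction = aws_intensity / global_intensity
emissions_fraction = power_fraction * intensity_fraction
print(f"Relative carbon intensity: {intensity_fraction:.0%}")  # 72%
print(f"Emissions vs. on-premises: {emissions_fraction:.0%}")  # 12%
print(f"Emissions reduction: {1 - emissions_fraction:.0%}")    # 88%
```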
To show just how large a role energy efficiency plays here relative to power mix, consider what happens if we adjust the power mix alone. It would never happen in practice, but a cloud provider could run on a power mix with more than 6 times the carbon intensity of on-premises data centers and still achieve the same net carbon impact. That’s how much more energy efficient cloud computing is than on-premises data centers, given the factors mentioned above!
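That break-even point is easy to verify (a quick sketch; the multiplier calculation is mine):

```python
power_fraction = 0.16  # the cloud uses 16% of the power for the same workloads

# The cloud's power mix could be this many times more carbon-intensive
# than on-premises before the emissions advantage disappears entirely.
break_even_multiplier = 1 / power_fraction
print(f"Break-even intensity multiplier: {break_even_multiplier:.2f}x")  # 6.25x
```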
Working Toward 100%
AWS remains focused on working towards our long-term commitment to 100% renewable energy usage. In the last year, we’ve taken several significant steps to achieve this goal, including teaming with Pattern Development to build and operate the 150 megawatt Amazon Wind Farm (Fowler Ridge) in Indiana.
In May 2015, we updated our Sustainable Energy webpage to announce that the AWS global infrastructure is powered by approximately 25% renewable energy today, and that we expect to reach 40% by the end of 2016. We have several additional developments planned over the next 12 to 18 months to help us get there, and we encourage our customers to check back on our sustainability page often to watch our progress.
The environmental argument for cloud computing is already surprisingly strong and I expect that the overall equation will just continue to improve going forward.
— Jeff;