How to get cloud cost reporting under control

You have to standardize on a set of descriptive tags that map resources to cost centers in your business

Although vendor-written, this contributed piece does not promote a product or service and has been edited and approved by Network World editors.

A recent report by RightScale says 71% of companies surveyed have adopted hybrid cloud, up from 58% a year earlier, while concern over cloud costs has risen to 26% from 18% three years ago. If you’re struggling to gain control over your cloud cost accounting, there’s no time like the present to address it. Solving this issue isn’t necessarily difficult when it’s tackled early, but left to languish, the support and technical debt you incur can become insurmountable.

At the core of this problem is the need to gain greater clarity into how your spend breaks down by actual usage, allowing for a better understanding of direct costs and a more granular means of managing them. This clarity comes with a catch: it depends on your ability to standardize on a set of descriptive tags that assign resources to cost centers within your business.

Whether you’re just starting off in cloud or you’re trying to get your cost reporting under control, there are some pretty simple steps that will help you cut through the confusion and concern.

Get organized and keep it simple

For most Infrastructure as a Service (IaaS) solutions, when you begin to deploy resources you’re able to add some level of descriptive tags to help identify and organize them in a meaningful way. For most cloud cost management tools, these tags associated with resources are the dimensions on which you’ll be able to directly segment cost. As you’ve probably guessed, harnessing this tagging structure is the first critical component in identifying where your cloud costs stem from.

If you’re utilizing an Amazon Web Services (AWS)-native, tag-based solution, you should first turn on the Detailed Billing Report with Resources and Tags. Enabling this will provide you with the core data set you can start digging into. Then there are three big decisions you need to make about how you apply tagging to your environment:

You need to balance how you’re going to use tags (note: tags in AWS are limited to 10 per object). Balancing functional and cost-allocation uses of tags takes some finesse. Keep in mind that developers and infrastructure engineers may want to use tags for things like clustering and service discovery, so monopolizing all 10 tags in AWS for billing purposes may constrain their approach to those problems and force them into a suboptimal solution.
You need to know what you’re able to tag and how that affects your usage of those resources (for example, S3 buckets can be tagged, but individual objects cannot). This may also affect how you use these services once you decide how you’re going to segment costs.
You need to decide when resources need to be tagged. Some organizations automatically destroy untagged or incorrectly tagged resources as part of their governance routines, sometimes without realizing what they’ve removed. Be wary of this once you’ve begun this process so you don’t undo your efforts to segment costs; a minimal launch-and-tag sketch follows this list.
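To make the first and third decisions concrete, here’s a minimal boto3 sketch that launches an EC2 instance and applies cost-allocation tags in the same breath, so nothing sits untagged long enough to trip an automated governance routine. The tag keys, values and AMI ID are placeholders, not a prescribed standard.

```python
# Hypothetical sketch: launch an EC2 instance and immediately apply the
# cost-allocation tags your standard requires. Tag keys (CostCenter,
# Environment, Owner) and the AMI ID are placeholders -- substitute your own.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

COST_TAGS = [
    {"Key": "CostCenter", "Value": "marketing-1234"},
    {"Key": "Environment", "Value": "staging"},
    {"Key": "Owner", "Value": "web-team"},
]

# Launch the instance first...
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# ...then tag it right away so it never exists untagged for long.
ec2.create_tags(Resources=[instance_id], Tags=COST_TAGS)
```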

Other third-party products offer more intricate ways of identifying resources beyond tagging, including Amazon Resource Name (ARN) path-based hierarchies that provide a more flexible, less limited structure. While this is a much more flexible means of building out a cost hierarchy, it requires a prescriptive or automated approach to launching resources so that paths are set properly, and it depends on incorporating a third-party tool to aggregate billing and utilization data, or simply to extract and present it (more on this in a bit).
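As an illustration of what a path-based hierarchy can look like, here’s a rough sketch that rolls up detailed billing report line items by the path portion of each resource’s ARN. The column names and the two-level path depth are assumptions for illustration, not any specific vendor’s implementation.

```python
# Hypothetical sketch: derive a cost hierarchy from ARN paths rather than
# tags. Column names ("ResourceId", "UnBlendedCost") follow the detailed
# billing report layout and are assumptions for illustration. Line items
# whose ResourceId is not an ARN (e.g. plain instance IDs) are skipped.
import csv
from collections import defaultdict

def arn_path(arn: str, depth: int = 2) -> str:
    """Return the first `depth` path segments of an ARN's resource part,
    e.g. arn:aws:iam::123456789012:role/teams/platform/deploy -> teams/platform."""
    resource = arn.split(":", 5)[-1]      # everything after the account ID
    segments = resource.split("/")[1:]    # drop the resource type ("role")
    return "/".join(segments[:depth]) or "(unpathed)"

costs = defaultdict(float)
with open("detailed-billing-with-resources-and-tags.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        rid = row.get("ResourceId", "")
        if rid.startswith("arn:"):
            costs[arn_path(rid)] += float(row.get("UnBlendedCost") or 0)

for path, total in sorted(costs.items(), key=lambda kv: -kv[1]):
    print(f"{path:30s} ${total:,.2f}")
```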

Make it easy to do the right thing
Most organizations rely heavily on this billing-level data for accounting and chargeback purposes. To ensure a high level of accuracy in the application of tags (or of a resource-based allocation strategy), you have to define a standard and then provide the means to apply it properly. When multiple people can launch instances or create resources in AWS, maintaining that level of standardization can be tricky, and it leads to a decision about whether resources that fall outside the segmentation model should be summarily deleted.
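One low-friction starting point is to report on non-compliant resources before deciding whether to delete them. The sketch below scans EC2 instances for a hypothetical set of required tag keys and simply prints what’s missing.

```python
# Hypothetical sketch: report (rather than delete) EC2 instances that are
# missing the tags your cost-allocation standard requires.
# REQUIRED_TAGS is an assumption; substitute your own standard.
import boto3

REQUIRED_TAGS = {"CostCenter", "Environment", "Owner"}

ec2 = boto3.client("ec2", region_name="us-east-1")
paginator = ec2.get_paginator("describe_instances")

for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            missing = REQUIRED_TAGS - tags
            if missing:
                print(f"{instance['InstanceId']}: missing {sorted(missing)}")
```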

The key to compliance with a cost-accounting resource segmentation structure isn’t really the carrot or the stick; it’s a comfortable pair of shoes. By this I mean that if you want resources to be deployed in a manner that requires significant attention to detail, you don’t necessarily need positive reinforcement (a party for the team with the best compliance) or negative reinforcement (deletion of untagged resources). You do, however, need to carve out the best path possible for moving forward. That means building out templates that represent the units of work your organization needs to deploy (from instances to whole workloads) within the Infrastructure as Code solution of your choice.

If you’re on AWS, AWS Service Catalog allows you to build custom AWS CloudFormation templates. These templates apply the desired tags or paths based on input parameters, enabling a consistent setup with the right amount of variability. If you’ve embraced Infrastructure as Code fully, there are other options, including managing deployment through Chef, Puppet or Terraform. These platforms make it possible to further integrate deployment templates with backend governance or even external cost management tools.
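As a simple illustration of the CloudFormation route, the sketch below launches a stack with stack-level tags, which CloudFormation propagates to the resources it creates (for resource types that support tagging). The template URL, parameter and tag values are placeholders.

```python
# Hypothetical sketch: create a CloudFormation stack with stack-level tags.
# Stack tags are propagated to supported resources in the stack, which keeps
# everything launched from the template aligned with the tagging standard.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="web-tier-staging",
    TemplateURL="https://s3.amazonaws.com/example-bucket/web-tier.template",  # placeholder
    Parameters=[
        {"ParameterKey": "InstanceType", "ParameterValue": "t2.micro"},
    ],
    Tags=[
        {"Key": "CostCenter", "Value": "marketing-1234"},
        {"Key": "Environment", "Value": "staging"},
    ],
)
```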

It’s a best practice to have reports that clearly show where your costs sit in a given deployment, or even across your enterprise. If you don’t use the data beyond cost chargeback, you’re not realizing its full potential and value. One of the great things about having cost data segmented in an on-demand compute model is that business units start asking questions they weren’t considering previously, for example (a query sketch follows this list):

Is it really worth $X to run 15 different environments?
Can I pay less for less performance where I don’t need it?
If I pay more (scale up), can I avoid having to refactor/redevelop a part of my application?
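Answering the first question is straightforward once costs are segmented. The sketch below groups a month of spend by an assumed Environment tag using the Cost Explorer API, a newer convenience than the detailed billing report discussed earlier; the tag key and dates are placeholders, and the same query can group by any tag you’ve activated for cost allocation.

```python
# Hypothetical sketch: monthly cost grouped by an "Environment" cost
# allocation tag via the Cost Explorer API. Tag key and dates are placeholders.
import boto3

ce = boto3.client("ce", region_name="us-east-1")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2016-08-01", "End": "2016-09-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "Environment"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    name = group["Keys"][0]  # e.g. "Environment$staging"
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{name:30s} ${amount:,.2f}")
```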

Some of the business-level questions above weigh cost against value rather than cost alone. At that point, being able to include other sources of data in the overall analysis becomes critical in order to compare relative cost with performance, whether at the system or the business level. For web-scale applications, understanding the relative cost per user and being able to tie the cost of infrastructure services to client delivery or new client acquisition is just one of the capabilities driving innovation in the cost management market.
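As a trivial illustration of the cost-to-value idea, the sketch below divides segmented monthly cost by a business metric pulled from elsewhere; both inputs are made-up placeholders you would source from your own cost and analytics systems.

```python
# Hypothetical sketch: combine segmented infrastructure cost with a business
# metric (monthly active users) to get a simple cost-per-user view.
monthly_cost_by_service = {   # placeholder figures from your cost reports
    "web-frontend": 18_400.00,
    "api-backend": 32_750.00,
    "reporting": 6_100.00,
}
monthly_active_users = 125_000  # placeholder figure from your analytics platform

for service, cost in monthly_cost_by_service.items():
    print(f"{service:15s} ${cost / monthly_active_users:.4f} per active user")
```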

For steady-state and legacy workloads, the ability to easily lock in AWS cost optimizations with Reserved Instances is a boon to traditional IT organizations looking for guidance and recommendations as they get on board with cloud. In the case of hybrid deployments (private data center plus cloud), these third-party tools can distill the enormous volume of available data into actionable concerns, which goes a long way toward making cloud cost management feel manageable rather than confusing and frustrating.
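If you’d rather pull Reserved Instance guidance programmatically, the Cost Explorer API exposes purchase recommendations. The sketch below is a rough example; the response field names reflect my reading of the API and should be verified against the current boto3 documentation before you rely on them.

```python
# Hypothetical sketch: pull EC2 Reserved Instance purchase recommendations
# from the Cost Explorer API. Field names are read defensively because the
# exact response shape should be checked against current boto3 docs.
import boto3

ce = boto3.client("ce", region_name="us-east-1")

resp = ce.get_reservation_purchase_recommendation(
    Service="Amazon Elastic Compute Cloud - Compute",
    LookbackPeriodInDays="THIRTY_DAYS",
    TermInYears="ONE_YEAR",
    PaymentOption="NO_UPFRONT",
)

for rec in resp.get("Recommendations", []):
    for detail in rec.get("RecommendationDetails", []):
        instance = detail.get("InstanceDetails", {}).get("EC2InstanceDetails", {})
        print(
            instance.get("InstanceType", "?"),
            "x", detail.get("RecommendedNumberOfInstancesToPurchase", "?"),
            "-> est. monthly savings $",
            detail.get("EstimatedMonthlySavingsAmount", "?"),
        )
```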

With everything that’s out there to help you get moving on a successful cost reporting strategy for your cloud deployments, it’s possible to satisfy your financial curiosity while also adding value to the business. While this may all sound daunting for someone just starting out, get moving now and iterate over time.

McClory has been writing code, managing DevOps, and designing scalable application infrastructures for more than ten years. As COO and CTO of DualSpark, Patrick served as an expert Amazon Web Services consultant, helping clients capitalize on the benefits of cloud architecture and design with modern and innovative software, infrastructure, and automation strategies leveraging solutions from AWS. After the acquisition of DualSpark by Datapipe, McClory assumed the role of SVP of Platform Engineering and Delivery Services. To learn more about Datapipe, visit Datapipe.com.
