Cloud Cost Engineering

Multi-Cloud Accounting

Ravinder · 6 min read
Cloud Cost · FinOps · AWS · Multi-Cloud · FOCUS · GCP · Azure

Multi-cloud is a reality before it is a strategy. A team adopts GCP for BigQuery. Another picks Azure for Active Directory integration. The data team runs Databricks on Azure. Now you have three billing formats, three sets of discount models, and three definitions of what "compute" means. The bill cannot be read unless it is first translated into a common language.

Why Normalization Is Hard

Each cloud provider uses different terminology for the same concepts:

| Concept | AWS | GCP | Azure |
|---|---|---|---|
| Virtual machine | EC2 instance | Compute Engine VM | Virtual Machine |
| Commitment discount | Savings Plan / RI | Committed Use Discount | Reservation / Savings Plan |
| Egress | DataTransfer OUT | Egress | Bandwidth |
| Object storage | S3 | Cloud Storage | Blob Storage |
| Billing granularity | Hourly (CUR) | Daily (BigQuery export) | Daily (EA export) |
| Cost field | UnblendedCost | cost | PreTaxCost |

Before FOCUS, every team built their own mapping tables. Those mappings drifted with every cloud provider schema update.
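A hand-rolled version of one of those mapping tables usually looked something like this — a sketch, where the column names follow each provider's billing export and the helper is my own:

```python
# A typical pre-FOCUS mapping: hand-maintained per provider, and silently
# stale the moment any provider renames a column in its export.
COST_COLUMN = {
    "aws":   "line_item_unblended_cost",  # AWS CUR
    "gcp":   "cost",                      # GCP BigQuery billing export
    "azure": "PreTaxCost",                # Azure EA export
}

def read_cost(provider: str, row: dict) -> float:
    """Pull the cost field out of one raw billing row for a given provider."""
    return float(row[COST_COLUMN[provider]])
```

Multiply this by every concept in the table above — commitments, egress, granularity — and by every team that built its own copy, and the drift problem is obvious.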

The FOCUS Specification

The FinOps Open Cost and Usage Specification (FOCUS) is an open standard from the FinOps Foundation. It defines a canonical schema for cloud billing data across providers.

Key FOCUS columns:

| FOCUS Column | Meaning |
|---|---|
| BillingAccountId | Provider account (AWS Account, GCP Project, Azure Subscription) |
| ServiceName | Normalized service (Compute, Storage, Database) |
| ResourceId | Provider-specific resource identifier |
| EffectiveCost | Amortized cost with commitments applied |
| ListCost | On-Demand / list price equivalent |
| ChargeType | Usage, Purchase, Adjustment, Tax |
| ChargeCategory | Cloud, SaaS, etc. |
| Region | Normalized region identifier |
| UsageQuantity / UsageUnit | Normalized usage amount and unit |

AWS, GCP, and Azure have all published FOCUS-compatible exports. The spec is at v1.1 as of early 2026.
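For teams writing their own pipeline, the target schema is small enough to pin down as a record type. A sketch covering the columns above — a subset of the full spec, with types that are my own reading of it:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class FocusRecord:
    """One FOCUS-normalized billing line item (subset of the v1.1 columns)."""
    BillingAccountId: str
    Provider: str            # "AWS" | "GCP" | "Azure"
    ServiceName: str
    ServiceCategory: str     # normalized: Compute, Storage, Databases, ...
    ResourceId: str
    Region: str
    ChargeType: str          # Usage | Purchase | Adjustment | Tax
    ChargePeriodStart: datetime
    ChargePeriodEnd: datetime
    EffectiveCost: float     # amortized, commitments applied
    ListCost: float          # on-demand / list-price equivalent
    BilledCost: float        # what actually hit the invoice
    UsageQuantity: float
    UsageUnit: str
```

Pinning the schema down in code gives every transformer a single target to hit, and makes schema drift a type error rather than a silent mismatch.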

Building a FOCUS Normalization Pipeline

```mermaid
flowchart LR
    subgraph Sources["Cloud Sources"]
        AWS[AWS CUR S3 Bucket]
        GCP[GCP Billing BigQuery]
        AZ[Azure EA CSV Export]
    end
    subgraph ETL["Normalization Layer"]
        AWX[AWS → FOCUS transformer]
        GCX[GCP → FOCUS transformer]
        AZX[Azure → FOCUS transformer]
    end
    subgraph Storage["Unified Store"]
        DL[Data Lake - Parquet]
        AT[Athena / BigQuery unified table]
    end
    subgraph Reporting["Reporting"]
        DASH[Multi-cloud Dashboard]
        ALERT[Budget Alerts]
        SHOW[Showback Reports]
    end
    AWS --> AWX --> DL
    GCP --> GCX --> DL
    AZ --> AZX --> DL
    DL --> AT --> DASH
    AT --> ALERT
    AT --> SHOW
```
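The normalization layer in that diagram can start as a simple dispatcher: route each raw export to its provider-specific transformer, then land the result in the lake. A minimal sketch — the per-provider transformers here are stand-ins that only normalize the cost column, and the path layout is an assumption:

```python
from pathlib import Path
import pandas as pd

def transform_to_focus(provider: str, raw: pd.DataFrame) -> pd.DataFrame:
    """Route one raw billing export through its provider-specific transformer.

    A real transformer maps the full FOCUS column set; these stand-ins
    only rename the cost column and tag the provider.
    """
    transformers = {
        "aws":   lambda df: df.rename(columns={"line_item_unblended_cost": "BilledCost"}),
        "gcp":   lambda df: df.rename(columns={"cost": "BilledCost"}),
        "azure": lambda df: df.rename(columns={"PreTaxCost": "BilledCost"}),
    }
    focus = transformers[provider](raw)
    focus["Provider"] = {"aws": "AWS", "gcp": "GCP", "azure": "Azure"}[provider]
    return focus

def land_in_lake(provider: str, raw: pd.DataFrame, lake_root: str) -> Path:
    """Write the normalized frame to the lake, partitioned by provider."""
    out_dir = Path(lake_root) / f"provider={provider}"
    out_dir.mkdir(parents=True, exist_ok=True)
    out_file = out_dir / "part-0.parquet"
    transform_to_focus(provider, raw).to_parquet(out_file)
    return out_file
```

Partitioning by provider keeps each transformer's output independently replayable when a mapping fix needs to be backfilled.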

Python transformer — AWS CUR to FOCUS:

import pandas as pd
from datetime import datetime
 
def transform_aws_cur_to_focus(cur_df: pd.DataFrame) -> pd.DataFrame:
    """
    Transform AWS CUR columns to FOCUS 1.1 schema.
    Handles the most common line item types.
    """
    focus = pd.DataFrame()
 
    focus["BillingAccountId"]   = cur_df["bill_payer_account_id"]
    focus["BillingAccountName"] = cur_df["bill_payer_account_id"]  # enrich from Org API
    focus["SubAccountId"]       = cur_df["line_item_usage_account_id"]
    focus["Provider"]           = "AWS"
    focus["InvoiceIssuerName"]  = "Amazon Web Services"
 
    focus["ServiceName"]        = cur_df["product_product_name"]
    focus["ServiceCategory"]    = cur_df["product_product_family"].map({
        "Compute Instance": "Compute",
        "Storage":          "Storage",
        "Database":         "Databases",
        "Data Transfer":    "Networking",
    }).fillna("Other")
 
    focus["ResourceId"]         = cur_df["line_item_resource_id"]
    focus["ResourceName"]       = cur_df["line_item_resource_id"]
    focus["Region"]             = cur_df["product_region_code"]
 
    focus["ChargeType"] = cur_df["line_item_line_item_type"].map({
        "Usage":                     "Usage",
        "Fee":                       "Purchase",
        "Credit":                    "Adjustment",
        "Refund":                    "Adjustment",
        "Tax":                       "Tax",
        "SavingsPlanCoveredUsage":   "Usage",
        "SavingsPlanRecurringFee":   "Purchase",
        "RIFee":                     "Purchase",
    }).fillna("Usage")
 
    focus["ChargeCategory"] = "Cloud"
 
    # Amortized cost is the FOCUS EffectiveCost
    focus["EffectiveCost"] = cur_df.apply(lambda r: (
        r.get("savings_plan_effective_cost", 0)
        if r.get("line_item_line_item_type") == "SavingsPlanCoveredUsage"
        else r.get("reservation_effective_cost", 0)
        if r.get("line_item_line_item_type") == "DiscountedUsage"
        else r.get("line_item_unblended_cost", 0)
    ), axis=1)
 
    focus["ListCost"]          = cur_df.get("pricing_public_on_demand_cost", focus["EffectiveCost"])
    focus["BilledCost"]        = cur_df["line_item_unblended_cost"]
    focus["UsageQuantity"]     = cur_df["line_item_usage_amount"]
    focus["UsageUnit"]         = cur_df["pricing_unit"]
 
    focus["ChargePeriodStart"] = pd.to_datetime(cur_df["line_item_usage_start_date"])
    focus["ChargePeriodEnd"]   = pd.to_datetime(cur_df["line_item_usage_end_date"])
 
    # Propagate cost allocation tags
    for tag_col in [c for c in cur_df.columns if c.startswith("resource_tags_user_")]:
        focus_tag = tag_col.replace("resource_tags_user_", "x_aws_tag_")
        focus[focus_tag] = cur_df[tag_col]
 
    return focus
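Whatever a transformer produces, it pays to validate before loading. A minimal sanity check over the output frame — the required-column list is my own trimmed subset of the FOCUS schema, enough to catch the most common transformer regressions:

```python
import pandas as pd

# A trimmed subset of the FOCUS 1.1 columns (illustrative, not exhaustive).
REQUIRED_COLUMNS = [
    "BillingAccountId", "Provider", "ServiceCategory", "ChargeType",
    "EffectiveCost", "BilledCost", "ChargePeriodStart", "ChargePeriodEnd",
]

def validate_focus(focus: pd.DataFrame) -> list[str]:
    """Return a list of schema/sanity problems; an empty list means pass."""
    problems = [f"missing column: {c}" for c in REQUIRED_COLUMNS
                if c not in focus.columns]
    if problems:
        return problems
    if focus["EffectiveCost"].isna().any():
        problems.append("null EffectiveCost")
    if (focus["ChargePeriodEnd"] < focus["ChargePeriodStart"]).any():
        problems.append("charge period ends before it starts")
    return problems
```

Run it as a gate between the transformer and the Parquet write, so a provider schema change breaks the pipeline loudly instead of corrupting the unified table.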

Unified Athena Query Across Clouds

Once all three exports land in S3 as FOCUS-normalized Parquet:

-- Monthly spend by provider and service category
SELECT
  Provider,
  ServiceCategory,
  DATE_TRUNC('month', ChargePeriodStart)  AS month,
  ROUND(SUM(EffectiveCost), 0)            AS effective_cost_usd,
  ROUND(SUM(ListCost), 0)                 AS list_cost_usd,
  ROUND(
    100.0 * (1 - SUM(EffectiveCost) / NULLIF(SUM(ListCost), 0))
  , 1)                                    AS effective_discount_pct
FROM unified_billing.focus_v1
WHERE ChargePeriodStart >= DATE_TRUNC('month', CURRENT_DATE - INTERVAL '3' MONTH)
  AND ChargeType = 'Usage'
GROUP BY 1, 2, 3
ORDER BY month DESC, effective_cost_usd DESC;

This query works identically regardless of which cloud the data came from. That is the value of normalization.
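The same cloud-agnostic property holds outside SQL: any tool that reads the FOCUS Parquet directly gets it too. A pandas sketch of the same rollup as the query above (function and column handling are my own):

```python
import pandas as pd

def monthly_rollup(focus: pd.DataFrame) -> pd.DataFrame:
    """Monthly EffectiveCost vs. ListCost by provider and service category,
    mirroring the Athena query above."""
    usage = focus[focus["ChargeType"] == "Usage"].copy()
    usage["month"] = usage["ChargePeriodStart"].dt.to_period("M").astype(str)
    grouped = (
        usage.groupby(["Provider", "ServiceCategory", "month"], as_index=False)
             [["EffectiveCost", "ListCost"]]
             .sum()
    )
    denom = grouped["ListCost"].mask(grouped["ListCost"] == 0)  # avoid div-by-zero
    grouped["effective_discount_pct"] = (
        100.0 * (1 - grouped["EffectiveCost"] / denom)
    ).round(1)
    return grouped.sort_values(["month", "EffectiveCost"], ascending=[False, False])
```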

FOCUS Adoption Status (2026)

```mermaid
timeline
    title FOCUS Specification Timeline
    2023 : FOCUS 0.5 published by FinOps Foundation
    2024 : FOCUS 1.0 GA — AWS, Azure, GCP announce support
    2025 : FOCUS 1.1 — adds SaaS and network cost columns
         : AWS native FOCUS export GA in Cost and Usage Reports
    2026 : Azure FOCUS export in preview
         : GCP FOCUS export GA
         : Major FinOps tools (Apptio, CloudHealth) adopt FOCUS as primary format
```

AWS FOCUS export is now available natively in the CUR v2 setup. Enable it from the billing console:

resource "aws_cur_report_definition" "focus" {
  report_name                = "focus-v1-export"
  time_unit                  = "HOURLY"
  format                     = "Parquet"
  compression                = "Parquet"
  additional_schema_elements = ["RESOURCES", "SPLIT_COST_ALLOCATION_DATA"]
  s3_bucket                  = aws_s3_bucket.cur.bucket
  s3_region                  = var.region
  s3_prefix                  = "focus/"
  report_versioning          = "OVERWRITE_REPORT"
  additional_artifacts       = []
 
  # Enable FOCUS format (CUR v2)
  # Set via console toggle "Enable FOCUS-compatible export" — Terraform support varies by provider version
}

Key Takeaways

  • Multi-cloud cost data is incomparable without normalization; spreadsheet-based comparisons that use each provider's native format produce misleading conclusions.
  • The FOCUS specification is now a practical standard, not a roadmap item — AWS and GCP have native exports and major tooling vendors support it as a primary format.
  • EffectiveCost (amortized with commitments applied) is the correct field for cross-cloud trend analysis; BilledCost lumps commitment purchases into the month they were bought and distorts the trend.
  • Building your own FOCUS transformer is a one-time investment of roughly two weeks; every FinOps analysis built on top of it becomes cloud-agnostic at no additional cost.
  • A unified Athena or BigQuery table across clouds enables a single SQL dialect for all billing queries, removing the per-cloud tool dependency and the learning curve that comes with it.
  • Treat FOCUS adoption as infrastructure work, not analytics work; it belongs in the data platform team's roadmap alongside the CUR pipeline, not in an ad-hoc analytics project.