Mapping Technology Resources and Costs to Solutions
A Multi-Pass Approach to Extending Your TBM Model
Mapping costs from Resource Towers to Solutions is how you turn supply-side spend into business-facing insight (Solution TCO, unit costs, and decision support). This guide is written for TBM administrators and mirrors the style of your Cost Pools, Budget Mapping, and Resource Towers pages. It presents three maturity paths—Crawl, Walk, Run—with step‑by‑step instructions, callouts, and acceptance criteria so first‑time practitioners can execute with confidence.
Prerequisites
- Financials mapped to Cost Pools/Sub-Pools
- Costs attributed to Resource Towers/Sub-Towers
- A basic Solution catalog (IDs, names, owners)
Tool-Agnostic Guidance
The solution mapping approaches on this page are designed to be platform-agnostic. They outline general processes that can be adapted to most commercial TBM tools, and are equally suitable for implementation in spreadsheets or basic databases.
Whether you’re using enterprise software or building your model manually, these mapping strategies are structured to be accessible and scalable.
Key Concepts
Solutions (TBM v5.0)
The business‑facing layer where technology costs roll up for TCO and value conversations. In v5.0, Solutions include products and services (applications support Solutions but are not Solutions themselves).
Solution Type / Category / Name / Offering
The TBM Solutions hierarchy. Type → high‑level grouping; Category → sub‑group; Name → the canonical Solution; Offering → optional, finer‑grained packaging under a Solution (tiers/bundles) used only when you have reliable drivers.
Service / Solution Catalog
Authoritative list of Solutions (and Offerings, if used) with stable IDs, owners, status. If no catalog exists, seed from TBM v5.0 taxonomy and relevant extensions, then validate with stakeholders.
Resource Towers / Sub‑Towers
The supply‑side structure organizing technology resources and operating functions that form the cost base to be mapped to Solutions.
Cost Pools / Sub‑Pools
Financial classification by nature of spend (e.g., Staffing, Hardware). Cost Pool mapping precedes Tower→Solution mapping.
Allocation vs. Attribution
Allocation spreads cost from a source (Tower/Sub‑Tower) to multiple Solutions using a driver (users, tickets, GB‑months). Attribution ties a specific resource (server/VM/DB/cloud asset/device) directly to one Solution.
Cost Drivers & Fallbacks
Measurable quantities used to allocate cost (e.g., CPU hours). Define a primary driver per category with an explicit fallback ladder (metering → CI counts → user‑based split).
Environment Policy
Decide whether Dev/Test is included in Solution TCO or reported separately; tag consistently.
Shared / Unassigned
Transparent buckets for costs/resources not yet mapped to a Solution; keep visible with a plan to reduce over time.
Data Needs
Before you choose a mapping path, inventory the data you already trust. The real difference between Crawl, Walk, and Run isn’t in the math you apply—it’s in the quality, coverage, and stability of your inputs. A simple allocation with high-trust data is far more valuable than an elaborate model built on weak or incomplete data.
Crawl – Minimum viable inputs
At this level, you need only:
- Tower/Sub-Tower cost for a given period
- A defensible driver per Sub-Tower (tickets, users, % splits)
- A validated Solution catalog with IDs and owners
Example: Service Desk tickets by business unit are enough to allocate Sub-Tower cost directionally across three or four Solutions.
Pitfall to avoid: Don’t invent precision. If no usage drivers exist, keep allocations simple and transparent.
Walk – Usage-informed inputs
Build on Crawl by adding relationship or usage signals:
- CMDB/CSDM relations (Solutions → apps/services → CIs)
- APM portfolios (mapping applications to Solutions)
- Telemetry (CPU-hours, GB-months, egress traffic)
These inputs let you convert measured usage into allocation percentages that better reflect reality.
Example: If your storage system tracks GB-months by application, you can proportionately allocate storage Sub-Tower cost across consuming Solutions.
Quick Tip: Only elevate a Sub-Tower to Walk when usage data is clean and complete—otherwise keep it at Crawl and log what’s missing.
Run – Resource-level inputs
At this level, you attribute costs directly to specific resources:
- Instance-level identifiers and tags (servers, VMs, DBs, containers, cloud assets, end-user devices)
- Enforced solution_id or relationship tags across systems
- Optionally, Offering identifiers if you have reliable tier/bundle drivers
Example: A tagged AWS account (solution_id=ERP) can be attributed 100% to ERP, while a shared RDS instance can be split 70/30 between ERP and Analytics based on connection metrics.
Governance callout: Run requires both technical enforcement (tagging standards, CI lineage) and organizational discipline (owners accountable for remediation).
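To make the Run pattern concrete, here is a minimal SQL-style sketch of tag-based attribution. It assumes two hypothetical tables: cloud_billing (resource_id, period, cost, solution_id_tag) and shared_resource_splits (resource_id, period, solution_id, split_pct) derived from connection metrics. Adapt the names and shapes to your own billing feed.

-- Sketch only: tagged resources go 100% to their tagged Solution,
-- shared resources fan out by metric-based split_pct (a fraction, e.g., 0.70),
-- and untagged spend lands in SHARED for remediation.
SELECT b.period,
       COALESCE(s.solution_id, b.solution_id_tag, 'SHARED') AS solution_id,
       SUM(b.cost * COALESCE(s.split_pct, 1.0)) AS attributed_cost
FROM cloud_billing b
LEFT JOIN shared_resource_splits s
       ON s.resource_id = b.resource_id
      AND s.period = b.period
GROUP BY b.period, COALESCE(s.solution_id, b.solution_id_tag, 'SHARED');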
Where inputs live
Expect to gather data across multiple systems:
- TBM model exports → Tower/Sub-Tower cost
- CMDB/APM → relationships and lineage
- Cloud billing & telemetry → usage and tags
- Endpoint directories → user/device counts
- Service desk platforms → tickets, effort, incidents
What matters most:
- Stable keys → Solution IDs, CI/resource IDs
- Consistent tagging → solution_id, environment, data_source
- Time alignment → usage and cost cover the same reporting period
SaaS – For SaaS categories, use license seats or monthly active users (MAU) as the preferred driver. If license data is unavailable, apply a temporary % split across in-scope Solutions. Immediately open a Procurement or Vendor Management task to obtain official license rosters so future allocations are usage-based.
Shadow IT – When technology spend surfaces outside IT (e.g., in departmental cost centers), keep it transparent by routing to the Shared/Unassigned bucket with clear labeling. Then trace spend back to the appropriate department or portfolio, and work with Finance and BU leaders to reclassify under the correct Solution. Treat this as a remediation path, not a permanent state.
How to Choose Your Solution Mapping Approach
Choosing the right starting point for mapping costs to Solutions is less about ambition and more about matching your data and governance maturity. Many organizations want to jump straight to the most advanced “Run” approach, but if the inputs aren’t ready—or the roles and tools aren’t in place—you risk creating noise, rework, and stakeholder distrust. A phased adoption (Crawl → Walk → Run) builds trust and demonstrates progress without overreaching.
1) Assess your current data foundation
Your data foundation sets the ceiling for how far you can go. Even the best allocation logic fails if the raw data isn’t stable or complete. The goal is to choose the highest process you can sustain with reliable, repeatable data, not just aspirational ideas.
- Only Tower/Sub-Tower costs available → Begin with Process A—Crawl, allocating costs with simple drivers (e.g., tickets, headcount, % splits).
Example: You may only have End-User and Service Desk totals with no user counts—this is a Crawl scenario.
- Partial usage or relationship data in CMDB/APM/telemetry → You can move to Process B—Walk, where you use consumption signals to drive allocations.
Example: Your storage systems can report GB-months by application, but compute still lacks consistent tags.
- Resource-level data with tags/IDs in place → Adopt Process C—Run, attributing costs directly to servers, volumes, cloud accounts, or devices.
Example: If your AWS accounts already carry solution_id tags across projects, you’re ready for Run.
Acceptance tip: If more than 10% of your Sub-Tower costs are sitting in Shared/Unassigned, stay in Crawl until usage drivers improve.
2) Match organizational readiness
Even if the data exists, your organization’s ability to manage the process matters just as much. Governance, roles, and tools often make the difference between sustainable mapping and repeated rework.
- Governance & roles – Are there accountable Solution owners and driver stewards? If not, Crawl may be the safe entry point.
- Tooling – Can you automate joins, refreshes, and version control in ETL/BI/TBM platforms? Walk/Run approaches require more automation and auditability.
- Expectations – If leadership only needs a directional TCO baseline, Crawl is sufficient. If they expect detailed showback, optimization, or unit cost reporting, Walk or Run is the better match.
Quick Tip: Don’t overestimate readiness: Having tags in some systems doesn’t mean you can skip Crawl. Tags must be consistent, enforced, and governed across all cost-bearing resources.
3) Define target outcomes
Defining outcomes upfront prevents wasted effort. Your chosen process should deliver exactly what stakeholders are asking for—no more, no less.
- Quick baseline and directional visibility → Start with Crawl.
- Usage-informed trade-offs and optimization opportunities → Use Walk.
- Precise optimization, offering-level splits, and unit costs → Commit to Run.
Tip – Crawl is not a failure: Some of the most successful TBM teams start here, publish credible baselines, and then use the improvement roadmap to secure investment in data, tooling, and governance.
Quick Tip: Never skip Crawl unless you already have clean, enforced resource-level tags and governance in place. Each phase is designed to be repeatable, defensible, and to build confidence in the TBM model over time.
Crawl – Driver-Based Solution Mapping
Goal: Establish a fast, defensible way to push Resource Tower/Sub-Tower costs to Solutions using simple, transparent drivers. Crawl gives you a usable baseline that stakeholders can trust while you build toward more advanced usage- or resource-based models.
1. Prepare inputs and scope
This step ensures you have the right foundations in place before attempting to map Tower/Sub-Tower costs to Solutions. Many TBM initiatives stall because the catalog isn’t ready, policies aren’t aligned, or the data sources aren’t understood. Taking the time to prepare here will make the allocation steps defensible and repeatable.
1.1 Confirm or establish the Solution catalog
If your organization already has a Solution Catalog (often part of a Service or Product Catalog), confirm it is current and complete:
- Ensure each Solution has a stable solution_id and solution_name.
- Validate ownership — every Solution should have an accountable owner.
- Check whether Offerings exist, and decide whether they are mature enough to include at Crawl (often they are deferred to later stages).
If no catalog exists, don’t delay this exercise:
- Seed an initial list directly from the TBM Taxonomy v5.0 (Solutions only, not Applications).
- Review with Finance, IT leaders, and Service Owners to validate relevance.
- Treat this as a working catalog until your organization formally defines its own.
Starting Without a Catalog: Use the TBM v5.0 Solution list as your initial reference. It is better to start with a standardized, widely accepted taxonomy than to delay while waiting for a “perfect” catalog. You can refine and rename Solutions later as your practice matures.
1.2 Verify cost sources and supporting data
Driver-based mapping requires two inputs:
- Sub-Tower costs: Collect totals for the financial period (e.g., prior month or quarter).
- Simple drivers: Verify access to metrics such as:
- Ticket counts by Solution (Service Desk).
- User counts or device inventories (HR directories, endpoint systems).
- Owner-provided % splits when usage is unavailable.
Also check:
- Costs and drivers cover the same reporting period.
- Drivers are available for all in-scope Sub-Towers (so you don’t leave costs unmapped).
- You understand what relationship or usage data exists (CMDB, telemetry, APM), even if it won’t be used until later maturity.
1.3 Align policy decisions
Before allocating costs, align with stakeholders on three policy questions:
- Dev/Test inclusion: Will Dev/Test be included in Solution TCO, or reported separately?
- Shared/Unassigned: Will unmapped costs be made visible in a Shared/Unassigned bucket (best practice: yes)?
- Offerings: If Offerings are in your catalog, are they mature enough to be included now, or deferred until usage/resource drivers exist?
Document these choices for governance and consistency.
1.4 Define driver strategy (primary driver and fallback ladder)
Each Sub-Tower requires a primary driver and a structured fallback ladder.
- Primary driver = the metric you will use to allocate costs.
- Example: Service Desk → ticket counts by Solution.
- Example: End User → number of devices assigned to user groups aligned with Solutions.
- Fallback ladder = structured alternates if the primary driver is missing.
- Example: Service Desk → ticket counts → fallback to owner-provided % split → fallback to equal distribution.
Tip – Why fallback ladders matter
A fallback guarantees that all Sub-Tower costs are mapped in every cycle. Without one, costs remain unmapped, producing fallout and undermining trust in reports.
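If you maintain drivers in a database, the ladder can be encoded directly. A minimal SQL-style sketch, assuming hypothetical tables driver_scope (one row per in-scope sub_tower, period, solution_id), ticket_counts, and owner_splits:

-- Sketch only: pick the first available driver per row; equal-split
-- percentages (the last resort) are computed downstream in normalization.
SELECT d.sub_tower, d.period, d.solution_id,
       CASE
         WHEN t.qty IS NOT NULL THEN 'tickets'          -- primary driver
         WHEN o.pct IS NOT NULL THEN 'owner_pct_split'  -- first fallback
         ELSE 'equal_split'                             -- last resort
       END AS allocation_driver,
       COALESCE(t.qty, o.pct) AS driver_value
FROM driver_scope d
LEFT JOIN ticket_counts t
       ON t.sub_tower = d.sub_tower AND t.period = d.period AND t.solution_id = d.solution_id
LEFT JOIN owner_splits o
       ON o.sub_tower = d.sub_tower AND o.period = d.period AND o.solution_id = d.solution_id;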
1.5 Add minimum fields for Crawl
Define your allocation file schema now with at least the following fields (see Unified Tags & Fields Reference for definitions):
- solution_id, solution_name, tower, sub_tower, period,
- allocation_driver, allocation_pct, driver_source,
- mapping_method='driver', environment, data_source,
- effective_start, version.
This ensures auditability and compatibility with later maturity processes.
1.6 Shared/Unassigned policy
Reinforce the Shared/Unassigned principle here:
- Any costs that cannot be credibly mapped must flow to a Shared/Unassigned bucket.
- Keep these costs transparent in reporting.
- Track them as a KPI (% of Tower cost) and assign accountability for remediation.
2. Collect and Normalize Drivers
Once you’ve defined your scope and confirmed which Sub-Towers you need to map, the next step is to gather the driver data that will determine how those costs are allocated to Solutions. This is the “engine” behind Crawl: the more defensible your drivers, the more credible your Solution allocations will be. The goal here is not to get perfect data, but to collect reasonable, transparent inputs that can be normalized into percentages.
2.1 Gather available drivers
Start by collecting the most practical and reliable drivers for each Sub-Tower:
- Service Desk → Ticket counts by Solution or category.
- End-User Services → Device inventories, user counts, or directory entries mapped to business units or Solutions.
- Applications / SaaS → License rosters or seat counts; if unavailable, request from Procurement.
- Shared Infrastructure (e.g., Servers, Storage) → Owner-provided % splits, or a temporary equal split if no usage data exists yet.
Tip – Don’t overcomplicate Crawl: At this stage, pick the most defensible single driver per Sub-Tower. Capture fallbacks (e.g., tickets → users → % split) so you can move forward even when usage data is patchy.
2.2 Align periods and populations
Drivers only work if they’re aligned with the costs you’re allocating. Before normalizing, check:
- Same time period: If you’re using March costs, make sure your ticket counts or user rosters are also from March
- Consistent population: If a Solution was introduced mid-year, ensure you only count drivers for the periods it was active
- Clean coverage: Exclude outliers like terminated users or duplicate devices, which can inflate allocations
This alignment step avoids the biggest source of fallout: mismatched bases that skew percentages.
2.3 Normalize driver data into percentages
Convert your raw driver data into allocation percentages per Sub-Tower and period:
- Divide each Solution’s driver quantity by the total for that Sub-Tower
- Ensure the percentages sum to 100%
- If some portion of spend cannot be credibly attributed, assign it to Shared/Unassigned (document the reason)
Worked Example – End-User Services
- User counts: Collaboration 6,000; ERP 3,000; Analytics 1,000
- Total = 10,000
- Allocation % = 60% / 30% / 10%
- If the Sub-Tower cost = $500,000, these percentages will be used in Step 4 to calculate allocated dollars
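If your drivers live in a table, normalization is one query. A SQL-style sketch, assuming a hypothetical driver_counts table with one row per (sub_tower, period, solution_id):

-- Sketch only: each Solution's share of its Sub-Tower driver total.
-- With the worked example above (6,000 / 3,000 / 1,000), this yields 60 / 30 / 10.
SELECT sub_tower, period, solution_id, qty,
       ROUND(100.0 * qty / SUM(qty) OVER (PARTITION BY sub_tower, period), 1) AS allocation_pct
FROM driver_counts;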
2.4 Record fallback ladders and assumptions
Document how you handled gaps so others can reproduce the process:
- Primary driver = tickets; fallback = % split if tickets missing
- Manual split provided by Solution owner; revisit after 90 days
- Missing SaaS roster → allocated 100% to Shared/Unassigned pending Procurement data
This metadata not only supports auditability but also makes it clear what to fix in order to progress from Crawl to Walk.
Output of Step 2: A normalized set of driver percentages (by Sub-Tower, Solution, and period) with documented assumptions and fallbacks. This dataset is now ready to be merged into an allocation table in Step 3.
3. Build the Allocation Table
Now that you have normalized driver percentages for each Sub-Tower, you need to transform those into a structured allocation table. This table is the bridge between your financial data (Sub-Tower costs) and your Solution TCO view. It provides a repeatable, auditable framework for applying allocations.
The key here is to design a table that is simple enough to manage in Crawl, but structured enough that you can expand it in Walk and Run without starting over.
3.1 Define the schema
Start by creating a consistent set of fields that capture what is being allocated, where it is going, and why. At minimum, include:
- tower — Name of the Resource Tower (e.g., End-User, Service Desk)
- sub_tower — The Sub-Tower where costs originate
- period — Financial period (month/quarter)
- solution_id — Unique identifier for the target Solution
- solution_name — Human-readable Solution name
- allocation_driver — Driver used (tickets, users, % split)
- allocation_pct — Normalized percentage applied to the Solution
- driver_source — System or method used to gather the driver
- mapping_method — Always driver in Crawl
- environment — Prod/Non-Prod designation (if included in scope)
- data_source — Origin of the Sub-Tower cost (e.g., GL, budget, TBM model extract)
- effective_start — Start date for this allocation rule
- version — Version number for tracking changes
Why schema matters: Don’t think of this table as a one-off spreadsheet. By defining a schema now, you make it possible to automate joins later in Walk/Run and to maintain version control for audits.
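For teams starting in a database rather than Excel, here is one possible sketch of the schema as a warehouse table. Names and sizes are illustrative, not prescriptive:

-- Sketch only: a minimal warehouse definition of the Crawl schema.
CREATE TABLE allocation_table (
    tower             VARCHAR(64)   NOT NULL,
    sub_tower         VARCHAR(64)   NOT NULL,
    period            CHAR(7)       NOT NULL,  -- e.g., '2025-03'
    solution_id       VARCHAR(32)   NOT NULL,  -- 'SHARED' for Shared/Unassigned
    solution_name     VARCHAR(128)  NOT NULL,
    allocation_driver VARCHAR(32)   NOT NULL,  -- tickets, users, pct_split
    allocation_pct    DECIMAL(6,3)  NOT NULL,  -- 0-100; sums to 100 per set
    driver_source     VARCHAR(64)   NOT NULL,
    mapping_method    VARCHAR(16)   NOT NULL DEFAULT 'driver',
    environment       VARCHAR(16),             -- Prod / Non-Prod
    data_source       VARCHAR(64),             -- GL, budget, TBM extract
    effective_start   DATE          NOT NULL,
    version           VARCHAR(16)   NOT NULL,
    PRIMARY KEY (tower, sub_tower, period, solution_id, version)
);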
3.2 Populate rows
For each (tower, sub_tower, period) combination:
- Create one row per Solution receiving an allocation
- Enter the allocation_pct derived in Step 2
- Verify that the total allocation for each (tower, sub_tower, period) sums to 100%
- If there are unallocated costs, create a dedicated row for Shared/Unassigned with the appropriate percentage
Worked Example – Service Desk (March)
- Tower: End-User, Sub-Tower: Service Desk
- Solutions: Collaboration (52.5%), ERP (37.5%), Analytics (10%)
- Shared/Unassigned: 0%
- Period: Mar-2025
This results in 3 rows for March, one per Solution
3.3 Capture assumptions and rationale
Document any special cases or manual inputs directly in the table or in a linked metadata file:
- Example note: “Manual 70/30 split applied at owner request due to missing ticket data; revisit June 2025.”
- Example note: “ERP usage estimate based on device counts; expect telemetry upgrade in Q3.”
By capturing these notes early, you prevent confusion during audits or reviews six months later.
3.4 Validate table integrity
Before moving to Step 4, check that the table is:
- Complete — Every in-scope Sub-Tower and period is represented
- Balanced — Allocations sum to 100% for each Sub-Tower-period set
- Traceable — Each row clearly shows the driver, source, and method
Think forward: Even if you’re starting in Excel, design your table as if it will live in a BI or TBM platform. That mindset will save rework when you automate later.
Output of Step 3: A structured allocation table with rows by Solution, Sub-Tower, and period, fully documented and ready to join with cost data in Step 4.
4. Apply Allocations to Cost
With your allocation table built, the next step is to apply those percentages to actual Sub-Tower costs. This is where the model moves from theoretical mapping to dollarized Solution costs that stakeholders can recognize and validate.
The objective is straightforward: for each Sub-Tower, take its total cost and distribute it across Solutions in proportion to the allocation percentages you created.
4.1 Join cost data to allocations
Bring together two inputs:
- Sub-Tower cost data — pulled from your TBM model export, GL, or budget system. Must include at least tower, sub_tower, period, and cost.
- Allocation table — created in Step 3, with tower, sub_tower, period, solution_id, and allocation_pct.
Perform a join on tower + sub_tower + period. This ensures every Sub-Tower cost has a corresponding allocation set.
Granularity matters: If you track costs monthly, make sure allocations are also monthly. Joining a quarterly cost to monthly allocations will distort results.
4.2 Calculate allocated costs
Once the join is complete, calculate the allocated cost for each row: allocated_cost = sub_tower_cost × allocation_pct.
- Each row now represents the cost for a specific Solution, tied back to its Sub-Tower and period.
- Costs should be in dollars (or local currency) to align with financial reporting.
Worked Example – Service Desk
- Sub-Tower cost (March): $1,000,000.
- Allocations: Collaboration 52.5%, ERP 37.5%, Data & Analytics 10%.
- Calculated: $525,000 / $375,000 / $100,000.
Now you can say with confidence: “Collaboration incurred $525k of Service Desk cost in March.”
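In SQL terms, Steps 4.1 and 4.2 reduce to one join and one multiplication. A sketch, assuming a hypothetical cost_extract table holding the reconciled Sub-Tower totals:

-- Sketch only: dollarize every allocation row.
SELECT a.tower,
       a.sub_tower,
       a.period,
       a.solution_id,
       c.cost * a.allocation_pct / 100.0 AS allocated_cost
FROM allocation_table a
JOIN cost_extract c
  ON c.tower = a.tower
 AND c.sub_tower = a.sub_tower
 AND c.period = a.period;

Applied to the worked example, the Collaboration row computes $1,000,000 × 52.5 / 100 = $525,000.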
4.3 Roll up to Solution totals
After allocations are applied:
- Aggregate by Solution — Sum across all Towers/Sub-Towers to produce each Solution’s total cost.
- Optionally roll up to categories/types — If your catalog uses TBM’s hierarchy, you can roll from Solution → Solution Category → Solution Type.
- Offerings (if included) — Roll up from Offering → Solution totals.
The roll-up creates your first Solution TCO view: an end-to-end dollarized picture of how Tower costs flow into business-facing Solutions.
4.4 Handle Shared/Unassigned transparently
If you created rows for Shared/Unassigned in Step 3:
- Keep them visible as their own line items.
- Track them over time to show whether fallout is shrinking.
- Document plans to remediate (e.g., add license data, fix tagging).
Don’t hide Shared/Unassigned: Some admins are tempted to “spread” these costs evenly across Solutions. Avoid this—doing so creates false precision and damages trust when gaps resurface.
Output of Step 4: Each Solution has a calculated dollar value of cost (Solution TCO), directly traceable back to Sub-Tower costs and allocation percentages. Shared/Unassigned is transparent and documented.
5. Reconcile and QA
After applying allocations, reconciliation is your safety net. This step ensures no dollars have been lost, duplicated, or misapplied. Without it, even a simple misalignment in joins or percentages can ripple into reports, undermining credibility with stakeholders. Think of reconciliation and QA as your proof of accuracy before publishing Solution TCO.
5.1 Reconcile totals
The first rule of mapping is that money in must equal money out:
- For every (tower, sub_tower, period), confirm that the sum of all allocated Solution costs equals the original Sub-Tower cost (allowing for rounding).
- If totals don’t match, revisit your allocation table—look for missing rows, incorrect percentages, or mismatched keys.
Formula Check
If sub_tower_cost = $1,000,000, then all allocations combined must equal $1,000,000 (within rounding tolerance).
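A SQL-style sketch of this check, assuming cost_extract holds the source totals and allocated_results is the Step 4 output:

-- Sketch only: flag any allocation set whose dollars do not tie back
-- to the source Sub-Tower cost within a one-cent rounding tolerance.
SELECT c.tower, c.sub_tower, c.period,
       c.cost                AS source_cost,
       SUM(r.allocated_cost) AS mapped_cost
FROM cost_extract c
JOIN allocated_results r
  ON r.tower = c.tower AND r.sub_tower = c.sub_tower AND r.period = c.period
GROUP BY c.tower, c.sub_tower, c.period, c.cost
HAVING ABS(c.cost - SUM(r.allocated_cost)) > 0.01;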
5.2 Coverage checks
Next, confirm the allocations themselves are complete and balanced:
- Each Sub-Tower allocation must total 100%.
- Look for unexpected $0 allocations—these can indicate Solutions missing from your driver set.
- Confirm Shared/Unassigned is captured explicitly, not absorbed silently.
Quick QA Query (SQL-style)
SELECT tower, sub_tower, period, SUM(allocation_pct)
FROM allocation_table
GROUP BY tower, sub_tower, period
HAVING ABS(SUM(allocation_pct) - 100) > 0.1;
Any rows returned indicate fallout or errors to fix.
5.3 Reasonableness checks
Even when the math balances, you must test whether the results make sense:
- Compare allocations against prior periods—do ratios shift dramatically without a business reason?
- Cross-check with stakeholder expectations—for example, if ERP Solutions typically consume ~40% of Service Desk tickets, but show only 10% in the results, investigate.
- Use trend charts (month over month) to quickly spot anomalies.
Example: If Collaboration spend suddenly drops by 80% in one month, check whether tickets were miscategorized or missing from the dataset.
5.4 Anti-double-count rule
Prevent the most common advanced error: double counting.
- If, in the future, you also attribute costs at the resource level (Run process), make sure those costs are excluded from element- or Tower-level allocations.
- Implement a QA check that verifies each cost stream is counted once and only once.
Quick Callout – Universal Rule: For every (tower, sub_tower, period), each cost stream is counted once and only once. No overlaps, no omissions.
5.5 Document fallout
If some costs can’t yet be credibly assigned:
- Route them into the Shared/Unassigned bucket (created in Step 3).
- Record why (e.g., missing license data, incomplete CMDB, ambiguous ownership).
- Add remediation steps (e.g., “Procurement to deliver SaaS seat counts by next quarter”).
This ensures fallout is visible and actionable, not hidden or ignored.
Output of Step 5: You have fully reconciled allocations that tie back 1:1 to Sub-Tower costs, 100% coverage per allocation set, visible Shared/Unassigned, and clear documentation of any anomalies. Your mapping results are now defensible and auditable.
6. Validate with Stakeholders
Even the cleanest allocation tables need human validation. Stakeholder review ensures that the mapping is not just mathematically correct but also business-credible. This step closes the loop between the TBM model and the people who own, deliver, and consume the Solutions. Validation also builds trust—stakeholders are more likely to adopt and support TBM insights if they’ve seen and confirmed the logic behind them.
6.1 Prepare a short review pack
Create a clear, consumable summary that tells the story of how costs were mapped:
- Inputs: Show the cost sources, Sub-Towers, and drivers used.
- Methodology: Explain how drivers were applied (e.g., Service Desk tickets, End-User device counts, % splits).
- Outputs: Present Solution-level cost totals, Shared/Unassigned if applicable, and coverage percentages.
- Notes: Include any known limitations (e.g., “SaaS seat counts unavailable—temporary % split applied”).
Tip – Keep it simple: Use one page per Sub-Tower for review. Avoid overwhelming stakeholders with every line item.
6.2 Review with owners
Invite the right mix of stakeholders for a productive review:
- Platform or Tower owners (Service Desk, End-User, Infrastructure leads) who understand operations.
- Solution owners who will inherit the cost allocations.
- Finance or FP&A who care about consistency and credibility.
In the session:
- Walk through driver choices and why they were selected.
- Highlight outliers (e.g., unusually high allocation to one Solution).
- Encourage stakeholders to challenge assumptions—this is where you uncover missing context.
Example: If the Service Desk team questions why Collaboration consumes 50% of tickets, they may clarify that tickets were miscoded, prompting a fix in the driver source.
6.3 Adjust and document
Stakeholder feedback often reveals blind spots or quick wins:
- Update allocations: Adjust percentages, driver sources, or Solution mappings based on feedback.
- Capture change notes: Document what was adjusted and why.
- Version control: Increment the version field in your mapping file so future reviewers see a clear evolution.
- Log improvements: If feedback requires systemic changes (e.g., refining ticket categories), record them as data asks for future cycles.
Callout – Don’t chase perfection: Stakeholder sessions aren’t about getting every decimal point perfect. The goal is to confirm directional accuracy and credibility. Perfection comes later, as you progress toward Walk and Run.
Output of Step 6: Stakeholders have reviewed and validated your mappings, confirmed driver logic, and signed off on directional accuracy. Feedback is logged, adjustments are versioned, and the process has built trust across IT, Finance, and business owners.
7. Document, Version, and Store
Once stakeholder validation is complete, the next step is to lock in your work so it’s repeatable, auditable, and transparent. Documentation and version control aren’t just administrative overhead—they’re what makes your mapping defensible to Finance, IT, and auditors. Without proper governance, the model can drift, making future cycles harder to trust or reproduce.
7.1 Version
Treat each mapping cycle like a product release:
- Stamp a version number (e.g., v1.0, v1.1) in your mapping file.
- Record the effective_start date (when this mapping goes into effect) and, if relevant, an effective_end date.
- Include a change note summarizing what’s new (e.g., “Added Collaboration Solution allocations; adjusted Service Desk drivers to align with ticket coding”).
Tip – Use semantic versioning: Major versions (v1.0 → v2.0) signal significant changes in scope or method; minor versions (v1.1 → v1.2) track incremental refinements.
7.2 Store
Choose a governed, accessible repository so mappings are not trapped on a personal drive:
- Corporate document repository (SharePoint, Confluence, Google Drive) with version history.
- Data warehouse or BI platform (for structured joins and lineage tracking).
- TBM platform (if your tool supports version-controlled mappings).
Best practice is to store both:
- Raw mapping files (Excel, CSV, SQL tables) containing allocations.
- Summary outputs (pivot tables, PDFs, dashboards) showing Solution-level totals and coverage metrics.
Callout – Store centrally, not locally: Mappings should be visible to TBM Admins, Finance, and Solution/Tower owners. Central storage prevents rework and enables fast onboarding of new team members.
7.3 Log data asks
Capture every improvement opportunity uncovered during the cycle. These become your data quality backlog and the bridge to Walk and Run maturity. Examples:
- “Procurement to provide SaaS license rosters by Q2.”
- “Tagging policy required for cloud storage buckets.”
- “Service Desk team to refine ticket categorization by Solution.”
Log these asks in a tracker (Jira, ServiceNow, spreadsheet) with owners and due dates. Review the backlog each cycle to prioritize fixes.
Output of Step 7: A versioned mapping file, stored in a governed repository, with a clear changelog and a backlog of data improvements. Anyone reviewing the model later can trace exactly how costs were mapped, when, and why.
8. Publish and Refresh
The final step in the Crawl cycle is to share results transparently and establish a repeatable cadence for updates. Publishing makes the work visible to stakeholders, while refreshing ensures that allocations remain accurate and trusted over time.
8.1 Publish
Publishing is more than just sending a spreadsheet — it’s about delivering actionable views that stakeholders can interpret and use.
- Publish Solution TCO: Aggregate allocations to show the cost per Solution, broken down by Tower and Sub-Tower.
- Highlight Shared/Unassigned: Keep any costs that couldn’t be mapped visible as their own category. Show the percentage relative to the total and link to your remediation plan.
- Use multiple formats:
- Finance leaders: Summary PDF or dashboard showing total cost per Solution.
- Solution owners: Pivot table or BI dashboard with Sub-Tower splits.
- TBM/IT Finance teams: Detailed mapping tables with drivers and allocation logic.
Tip – Visualize coverage: Include a pie chart or bar chart that shows the % of Sub-Tower costs mapped to Solutions vs Shared/Unassigned. It builds confidence by demonstrating transparency and control.
8.2 Cadence
Set and communicate a regular refresh schedule that aligns with financial and governance cycles.
- Monthly: Ideal for organizations aligning to monthly financial close. Ensures Solution TCO reflects the latest cost and driver data.
- Quarterly: Acceptable when inputs are less volatile, but risks data drifting from business reality.
- Trigger-based: Re-run mappings when significant changes occur (e.g., new Sub-Tower, reorganization, or taxonomy update).
Maintain a changelog: Log each run’s version, effective date, and key notes. This becomes your audit trail and helps explain variances across cycles.
8.3 Plan next step
Each refresh cycle is also an opportunity to mature the model. Use findings from the current cycle to decide where to invest effort next:
- Expand coverage: Add additional Sub-Towers into the Crawl process (e.g., Network, Storage).
- Improve drivers: Replace % splits with measurable drivers (e.g., users, GB-months, tickets).
- Prepare for Walk: Identify data improvements (tags, CMDB links, telemetry) that enable usage-informed mapping. Document them in your backlog.
Quick Tip – Consumer Handoff: Once Solution TCO is published, you can take the next step toward business accountability by attributing Solution costs to Consumers (business units, departments, or products). Use headcount, users, or tickets as drivers. See Mapping to Consumers for detailed guidance.
Output of Step 8: Published Solution TCO views (with Shared/Unassigned visible), a defined refresh cadence aligned to financial close, and a backlog of improvements pointing toward the Walk maturity stage.
Acceptance Criteria (Crawl)
You know your Crawl cycle is complete — and ready for stakeholder consumption — when the following criteria are met:
- 100% of Sub-Tower costs allocated
Every dollar from each in-scope Sub-Tower has been mapped to a Solution or explicitly routed to Shared/Unassigned. No residual costs remain unmapped.
- Reconciliation with source totals
The sum of Solution allocations (plus any Shared/Unassigned) equals the source Sub-Tower cost (allowing for rounding). A reconciliation check is documented and retained.
- Transparent Shared/Unassigned bucket
If costs cannot yet be credibly assigned, they appear in a clearly labeled Shared/Unassigned bucket. The percentage of total cost is shown, and a remediation plan is logged with ownership and target reduction.
- Versioned allocation tables stored
Allocation logic and results are saved with a unique version number, effective dates, and changelog notes. This ensures results are reproducible and audit-ready.
- Published Solution TCO available to stakeholders
Stakeholders can access published reports that show Solution TCO by Tower/Sub-Tower, including charts or tables highlighting allocation coverage and Shared/Unassigned spend.
- Improvement plan logged for progression to Walk
A documented backlog of data enhancements (e.g., tags, telemetry, CMDB relationships) is maintained. This defines the pathway to advance from driver-based mapping to usage-informed mapping.
Quick Tip – Crawl success isn’t about perfection. It’s about proving that your TBM model can move all Tower costs into business-facing Solutions in a way that is transparent, repeatable, and trusted.
Walk – Usage-Informed Solution Mapping
Goal: Improve accuracy by using usage and relationship signals (CMDB, CSDM, APM, telemetry, service desk) to convert Sub-Tower usage into Solution allocation percentages. Walk is the bridge between simple driver-based allocations (Crawl) and precise resource-level attribution (Run). At this stage, you begin to reflect actual consumption patterns in your allocations, giving business stakeholders more confidence in reported Solution TCO while still avoiding the complexity of instance-level tagging.
1. Define Scope and Readiness
Before you attempt usage-informed mapping, confirm that both your data and your organization are prepared. Unlike Crawl, Walk requires reliable signals of who is using what—and consistent governance to prevent drift.
1.1 Confirm in-scope Sub-Towers (elements)
Identify the Sub-Towers where you can credibly measure consumption by Solution. Use official TBM Taxonomy v5.0 names only, such as:
- Compute – typically allocated using CPU hours or VM counts.
- Storage – allocated using GB-months or IOPS.
- Network – allocated using bandwidth or egress GB.
- Database – allocated using DB instance hours or connection counts.
- Middleware – allocated using transactions or usage logs.
- End-User – allocated using device counts or login records.
- Service Desk – allocated using tickets or work effort hours.
Tip – Prioritize readiness: If a Sub-Tower doesn’t yet have reliable usage metrics, keep it in Crawl for now. Don’t “force” Walk—document the data fixes required instead.
1.2 Verify relationship & usage data
Usage-informed mapping depends on lineage and telemetry. Verify that the following data sources exist and are trusted:
- CMDB or CSDM relationships: Map Solutions → services/apps → Configuration Items (CIs). Example: ERP Solution → SAP application → database server cluster.
- Application Portfolio Management (APM): Provides mappings between business applications and underlying infrastructure.
- Telemetry: Metrics such as CPU hours, GB-months of storage, or network throughput from monitoring tools.
- Service Desk systems: Ticket volume or labor hours categorized by Solution or service type.
If any data is missing or inconsistent (e.g., CMDB has partial relationships), capture this as a data gap log. You’ll need this for future Run-level maturity.
1.3 Align policy decisions
Establish the rules for how you’ll treat special cases before building allocation tables. This ensures consistency across Sub-Towers.
- Environment inclusion: Apply your organization’s Environment Policy (e.g., include Dev/Test or keep separate). Document the choice and apply consistently.
- Offerings: Only split Solutions into Offerings if you have reliable drivers (e.g., license tier counts, feature bundle usage). Otherwise, allocate at Solution level.
- Shared platforms: For infrastructure serving multiple Solutions where usage is indistinguishable, allocate what you can and route the remainder into Shared/Unassigned. Document the remediation plan (e.g., “add tags to shared buckets by Q3”).
Callout – Shared/Unassigned: This is not a dumping ground. Keep it visible in reporting and track it as a KPI to ensure it trends down over time.
1.4 Add minimum fields for Walk
Extend the Crawl schema to capture more detail. Add these columns (see Unified Tags & Fields Reference):
- driver_qty – raw usage value (e.g., 5,000 GB-months).
- mapping_confidence – rating of allocation quality (High/Medium/Low).
- mapping_notes – explanation of assumptions or fallbacks.
- shared_unassigned_flag – Y/N to mark records not fully attributable.
- owner_team – accountable group for resolving fallout or data issues.
These fields give stakeholders confidence in your logic and allow QA teams to prioritize fixes.
Output of Step 1: A confirmed list of Sub-Towers in scope, validated usage/relationship data sources, documented policy decisions (environment inclusion, Shared/Unassigned handling, Offerings), and a structured allocation schema with all minimum fields defined. This ensures you’re ready to build defensible usage views without scope drift or policy ambiguity.
2. Build and Normalize Solution Usage Views
Once you’ve defined your scope and confirmed that usage or relationship data exists, the next step is to translate that raw telemetry into a structured view of how Solutions are actually consuming Sub-Tower resources. This is the heart of Walk: the stronger your usage views, the more accurate and defensible your allocations will be. The goal here isn’t perfection—it’s to produce clean, normalized usage signals that can be converted into allocation percentages.
2.1 Establish lineage
Start by defining the path from Solutions down to the infrastructure elements that support them. This creates a logical chain you can explain to auditors and stakeholders later.
- Example (ERP Solution): ERP → SAP application → DB servers (CI group) → CPU hours.
- Example (Collaboration): Collaboration → MS Teams → storage volumes → GB-months.
Tip – Write it plainly: Don’t just document the join logic in SQL or a BI tool. Capture the lineage in plain language: “ERP Solution costs are allocated based on CPU hours for SAP application servers listed under the ERP service in the CMDB.”
2.2 Aggregate usage by Solution and Sub-Tower
With lineage defined, calculate total usage values for each Solution within each Sub-Tower. Common usage metrics include:
- Compute: CPU hours or VM instance hours.
- Storage: GB-months consumed.
- Network: Egress GB or bandwidth consumed.
- Database/Middleware: DB hours or connection counts.
- End-User: Device counts or login sessions per Solution population.
- Service Desk: Tickets or hours tagged to Solutions.
Worked Example – Storage:
- Collaboration = 32,000 GB-months
- ERP = 18,000 GB-months
- Data & Analytics = 20,000 GB-months
- Shared/Unassigned = 5,000 GB-months
- Total = 75,000 GB-months
Here, 70,000 GB-months are attributable to Solutions, while 5,000 remain Shared/Unassigned until tagging improves.
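A SQL-style sketch of this aggregation, assuming hypothetical telemetry (ci_id, period, usage_qty) and cmdb_lineage (ci_id, solution_id) tables maintained per the lineage defined in 2.1:

-- Sketch only: roll telemetry up to Solution level via CMDB lineage.
-- CIs with no lineage fall into SHARED, matching the 5,000 GB-months above.
SELECT COALESCE(l.solution_id, 'SHARED') AS solution_id,
       t.period,
       SUM(t.usage_qty) AS driver_qty   -- e.g., GB-months
FROM telemetry t
LEFT JOIN cmdb_lineage l
       ON l.ci_id = t.ci_id
GROUP BY COALESCE(l.solution_id, 'SHARED'), t.period;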
2.3 Cleanse and harmonize usage data
Raw usage data is often messy—different sources, inconsistent units, or overlapping feeds. Before you normalize:
- Align periods: Ensure usage covers the same period as your cost data (e.g., March usage for March costs).
- Standardize units: Convert into comparable measures (e.g., all storage in GB-months).
- De-duplicate: If the same VM is reported by both monitoring and CMDB, remove duplicates.
- Normalize populations: Verify that usage data covers the full Solution scope for the period (no missing applications or business services).
Caution – Period drift: Monitoring tools often use rolling 30-day windows, which can misalign with calendar-month costs. Adjust periods so usage matches your financial close dates.
2.4 Document assumptions and gaps
Record what you had to assume so others can follow your logic and so you know what to fix in future cycles:
- Fallback applied: “Primary driver = GB-months; fallback = CI counts for workloads missing tags.”
- Shared usage held aside: “5,000 GB-months placed in Shared/Unassigned pending storage tag fixes.”
- Manual inputs: “ERP Solution owner provided a 60/40 split across database clusters until telemetry improves.”
Output of Step 2: A clean, aggregated dataset of usage quantities (by Solution, Sub-Tower, and period), with lineage paths, normalized units, and documented assumptions. This dataset is now ready to be converted into allocation percentages in Step 3.
3. Convert Usage to Allocation Percentages
Once you’ve built your usage views, the next step is to transform those raw usage quantities into allocation percentages that can be applied to Sub-Tower costs. This is where usage turns into money: by normalizing, you ensure every Sub-Tower’s spend flows into Solutions in a transparent, auditable way.
3.1 Normalize usage per Sub-Tower
Divide each Solution’s usage by the total attributable usage for that Sub-Tower, so percentages sum to 100%.
Formula: allocation_pct = (Solution usage ÷ total attributable usage) × 100
- Example (Storage):
- Collaboration = 32,000 GB-mo → 32,000 ÷ 70,000 = 45.7%
- ERP = 18,000 GB-mo → 18,000 ÷ 70,000 = 25.7%
- Data & Analytics = 20,000 GB-mo → 20,000 ÷ 70,000 = 28.6%
- Shared = 5,000 GB-mo → stays visible in Shared/Unassigned, not included in percentages.
Tip – Document total base: Always note the denominator used for percentages (e.g., 70,000 GB-months). This makes your math reproducible during audits.
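A SQL-style sketch of the normalization, assuming a solution_usage table built in Step 2. Note the denominator deliberately excludes SHARED rows, matching the 70,000 GB-month base in the example above:

-- Sketch only: percentages use only attributable usage as the denominator;
-- Shared/Unassigned rows stay visible but receive no percentage.
SELECT sub_tower, period, solution_id, driver_qty,
       CASE WHEN solution_id <> 'SHARED'
            THEN ROUND(100.0 * driver_qty /
                 SUM(CASE WHEN solution_id <> 'SHARED' THEN driver_qty END)
                   OVER (PARTITION BY sub_tower, period), 1)
       END AS allocation_pct
FROM solution_usage;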
3.2 Handle shared and multi-solution components
Not all usage is cleanly attributable. Create explicit rules:
- Shared/Unassigned: Keep visible as a dedicated row, with a remediation plan. Example: “5,000 GB-months untagged → flagged for storage team to fix.”
- Multi-solution resources: If one element serves multiple Solutions (e.g., a shared DB), split based on observed usage (queries, sessions). If unavailable, use a temporary % split agreed with owners.
3.3 Apply fallback ladders
Define and document your fallback hierarchy so allocations don’t stall when usage is missing:
- Primary: Metered usage (CPU hours, GB-months, tickets).
- Secondary: CI counts (e.g., number of VMs or storage buckets per Solution).
- Tertiary: Manual % split, time-boxed until usage improves.
Caution – Avoid silent swaps: If you fall back from metered usage to CI counts, note it clearly in metadata so stakeholders know where precision was lost.
3.4 Worked example – Storage allocation
- Total attributable usage = 70,000 GB-months.
- Percentages: 45.7% Collaboration, 25.7% ERP, 28.6% Data & Analytics.
- Sub-Tower cost = $500,000.
- Allocations: $228,500 → Collaboration, $128,500 → ERP, $143,000 → Data & Analytics.
- Shared 5,000 GB-months stays visible in reports, pending remediation.
Output of Step 3: A normalized allocation dataset that expresses usage quantities as percentages per Solution, Sub-Tower, and period. This dataset is now ready to be merged into the allocation table in Step 4, ensuring every cost dollar flows in proportion to actual consumption.
4. Build the Element→Solution Allocation Table
Once you have normalized percentages, you need a formal allocation table. This table is the single source of truth that links Sub-Tower costs to Solutions. Without it, your math lives in scattered spreadsheets and can’t be governed or repeated. Think of the allocation table as the “bridge” between usage data and cost attribution.
4.1 Define schema
Design a schema that captures identity, driver, allocation, and governance fields. At minimum, include:
- tower
- sub_tower
- period
- solution_id
- solution_name
- allocation_driver (e.g., GB-months, CPU hours, tickets)
- driver_qty (numeric driver value for that Solution)
- allocation_pct (normalized percentage of Sub-Tower)
- driver_source (system that produced driver data, e.g., CMDB, telemetry)
- mapping_method = 'usage'
- mapping_confidence (High/Medium/Low)
- mapping_notes (rationale, fallback used, exceptions)
- effective_start, effective_end
- version
Tip – Add lineage_ref: Include a lineage_ref (ETL job ID, BI run ID, or pipeline reference) so admins can trace back to the exact run that produced the mapping.
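If you created the Crawl schema as a database table, the Walk fields are additive. A Postgres-style sketch (some engines require one ADD COLUMN per statement):

-- Sketch only: extend the Crawl allocation_table with the Walk fields.
ALTER TABLE allocation_table
  ADD COLUMN driver_qty             DECIMAL(18,2),        -- raw usage value
  ADD COLUMN mapping_confidence     VARCHAR(8),           -- High/Medium/Low
  ADD COLUMN mapping_notes          VARCHAR(512),         -- rationale, fallbacks
  ADD COLUMN shared_unassigned_flag CHAR(1) DEFAULT 'N',
  ADD COLUMN owner_team             VARCHAR(64),
  ADD COLUMN effective_end          DATE,
  ADD COLUMN lineage_ref            VARCHAR(64);          -- ETL/BI run ID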
4.2 Populate rows
- For each Sub-Tower and period, create one row per Solution with its driver value, percentage, and metadata.
- Include Shared/Unassigned as its own Solution row if applicable.
- Example – Storage Sub-Tower (Mar 2025):
- Collaboration – 32,000 GB-mo → 45.7%
- ERP – 18,000 GB-mo → 25.7%
- Data & Analytics – 20,000 GB-mo → 28.6%
- Shared/Unassigned – 5,000 GB-mo (flagged, not included in percentages)
This ensures the table adds up cleanly and keeps fallout visible.
4.3 Attach governance metadata
Document key choices so the table is transparent to future reviewers:
- Fallback applied? (e.g., CI counts used instead of GB-months).
- Shared flagged? (e.g., 12% of usage remains unmapped).
- Confidence rating? (e.g., High for Collaboration, Medium for ERP due to weak telemetry).
- Notes for remediation: (e.g., “Storage team must enable tags on top 5 buckets”).
This metadata supports auditability and makes it clear how to progress from Walk to Run.
4.4 Worked example – Allocation table row
| tower | sub_tower | period | solution_id | solution_name | allocation_driver | driver_qty | allocation_pct | driver_source | mapping_method | mapping_confidence | mapping_notes | effective_start | version |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Compute | Storage | 2025-03 | SOL-001 | Collaboration | GB-months | 32,000 | 45.7% | Telemetry | usage | High | Clean metering data | 2025-03-01 | v1 |
| Compute | Storage | 2025-03 | SOL-002 | ERP | GB-months | 18,000 | 25.7% | Telemetry | usage | Medium | Missing tags on 2 DB volumes | 2025-03-01 | v1 |
| Compute | Storage | 2025-03 | SOL-003 | Data & Analytics | GB-months | 20,000 | 28.6% | Telemetry | usage | High | Fully tagged buckets | 2025-03-01 | v1 |
| Compute | Storage | 2025-03 | SHARED | Shared/Unassigned | GB-months | 5,000 | — | Telemetry | usage | Low | Untagged storage pending fix | 2025-03-01 | v1 |
Output of Step 4: A structured allocation table (Solution × Sub-Tower × period) with percentages, driver quantities, and governance metadata. This becomes the authoritative dataset you will use in Step 5 to apply costs, calculate allocations, and perform reconciliation.
5. Apply Allocations and Reconcile
At this step, you take the normalized allocation table you built in Step 4 and apply it to actual Sub-Tower costs. The goal is simple but critical: every dollar in a Sub-Tower should be mapped to a Solution (or transparently flagged as Shared/Unassigned). This is also the point where you perform checks to ensure no costs are lost, double-counted, or misapplied.
5.1 Join cost to allocations
- Join keys: Match Sub-Tower cost records with your allocation table using tower, sub_tower, and period.
- Input cost source: Use the reconciled Sub-Tower totals from your Resource Tower process.
- Integrity check: If costs don’t match to an allocation row, that’s fallout—flag it and assign to Shared/Unassigned until resolved.
5.2 Calculate allocated costs
For each Solution row in the allocation table, compute: allocated_cost = sub_tower_cost × allocation_pct.
- Ensure rounding rules are consistent (e.g., 2 decimals or to the nearest cent).
- Maintain precision in intermediate calculations to avoid creeping reconciliation errors.
Worked Example – Storage Sub-Tower (Mar 2025):
- Sub-Tower cost = $500,000.
- Collaboration allocation % = 45.7% → $228,500.
- ERP allocation % = 25.7% → $128,500.
- Data & Analytics allocation % = 28.6% → $143,000.
- Shared/Unassigned = $0 (kept as a visible row, with 5,000 GB-months tagged for remediation).
5.3 Reconcile totals
Perform a reconciliation at multiple levels:
- Sub-Tower level: Confirm that the sum of allocated costs per (tower, sub_tower, period) = the original Sub-Tower total (± rounding).
- Tower level: Aggregate allocations by Tower and compare with Tower totals.
- Global check: Ensure global sum of allocations matches total IT spend flowing into Solutions.
If reconciliation fails:
- Recheck allocation_pct values (must sum to 100% per Sub-Tower).
- Recheck that Shared/Unassigned rows are excluded from percentage math but remain transparent.
5.4 Quality assurance (QA) checks
In addition to reconciliation, apply QA tests:
- Coverage: Verify no Solution has an allocation_pct of 0% with a nonzero driver_qty (signals a data error).
- Reasonableness: Compare allocations to prior periods or expected consumption patterns. Example: if ERP storage suddenly doubles while usage data hasn’t changed, investigate telemetry.
- Trend deltas: Produce period-over-period variance reports. Large swings (>20%) should always be explained.
5.5 Anti-double-count rule
Walk is still usage-based, but as you mature toward Run, you may start attributing costs at the resource level. To avoid over-reporting, enforce the anti-double-count rule:
- If a cost stream has been attributed at the resource level, suppress its overlapping element-level allocation.
- Maintain a flag or reference (e.g., override_flag=Y) so automated jobs know which element allocations to exclude once resource-level mapping exists.
This rule keeps your Solution TCOs credible as you transition from Walk to Run.
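A SQL-style sketch of the suppression, assuming a hypothetical resource_attributions table that flags cost streams already attributed at resource level. Real implementations usually match at a finer cost-stream grain than shown here:

-- Sketch only: drop element-level allocation rows for any Sub-Tower/period
-- already covered by resource-level attribution (override_flag = 'Y').
SELECT a.*
FROM allocation_table a
WHERE NOT EXISTS (
    SELECT 1
    FROM resource_attributions r
    WHERE r.tower = a.tower
      AND r.sub_tower = a.sub_tower
      AND r.period = a.period
      AND r.override_flag = 'Y'
);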
Output of Step 5: A fully costed allocation dataset where every Sub-Tower dollar has been assigned to Solutions (or visibly held in Shared/Unassigned). Reconciliation reports confirm accuracy, and QA tests provide confidence for stakeholder review in Step 6.
6. Validate with Stakeholders
Even if your math is correct, your allocations must also be understandable and accepted by the people who will use them — platform leads, Finance partners, TBM admins, and Solution owners. Step 6 ensures your Walk outputs align with operational knowledge and business expectations. Without this validation, even technically correct allocations may face resistance.
6.1 Review with platform leads and owners
- Audience: Infrastructure/platform leads (e.g., Storage, Compute), Finance/TBM teams, and Solution owners.
- What to present:
- The usage drivers you applied (e.g., GB-months, CPU hours, tickets).
- The lineage path (Solution → Service/App → CI/element).
- Allocation percentages and dollar outputs by Sub-Tower.
- Any Shared/Unassigned portions with notes on remediation.
- How to frame it: Walk them through one Sub-Tower at a time, showing how usage translates into costs.
Example: “In March, Storage Sub-Tower costs of $500k were allocated as: Collaboration 46%, ERP 26%, Analytics 28%. Shared 5,000 GB-months remain unallocated and are logged for remediation. Does this match what you observe in platform usage?”
6.2 Adjust and document
- Incorporate feedback: If stakeholders identify anomalies (e.g., ERP is under-allocated because a key database was missed), adjust the allocation table.
- Update metadata: Capture the rationale for changes in mapping_notes and increase the version number.
- Confidence tagging: Adjust mapping_confidence from Medium → High if changes increase reliability, or leave as Medium/Low if gaps remain unresolved.
Tip – Use visible deltas: Present before-and-after comparisons when changes are made, so stakeholders understand the exact impact of adjustments.
6.3 Set improvement targets
Stakeholder validation isn’t just about this cycle — it’s also about building momentum toward Run. Define and log measurable improvement targets with owners. Examples:
- Shared Storage: Reduce Shared allocation from 12% to <5% by improving S3 bucket tagging within 90 days.
- Driver fidelity: Replace CI counts with metered GB-months for Database Sub-Tower by next quarter.
- Shadow IT: Work with Procurement to integrate SaaS license rosters for 3 missing vendors.
These targets create a roadmap and accountability for continuous improvement.
Output of Step 6: Validated allocation results endorsed by platform and business owners, updated allocation tables with documented changes, and logged improvement targets that prepare your organization to progress toward Run.
7. Operationalize and Improve
Once allocations have been validated with stakeholders, the next challenge is making the process sustainable. Walk isn’t just a one-time analysis — it’s a recurring cycle that needs to run reliably every period (monthly, quarterly, or aligned with financial close). Step 7 is where you move from “pilot spreadsheets” to repeatable operational practice, supported by automation, governance, and quality controls.
7.1 Implement in tooling
- Choose your platform: Depending on organizational maturity, you might start with Excel or SQL queries, but most organizations will want to operationalize Walk allocations in:
- ETL/BI tools (e.g., Alteryx, Power BI, Tableau Prep).
- Your TBM platform (Apptio, Nicus, etc.), which often includes built-in allocation modules.
- Custom pipelines (Python + pandas, SQL in data warehouse).
- Key requirement: Whatever tool you choose, the logic must be documented, version-controlled, and reproducible. If a report breaks during a close, you should be able to trace exactly what changed.
7.2 Automate refresh and data quality
- Set refresh cadence: Schedule monthly (or financial-close aligned) runs of the Walk process.
- Automate QA tests: Build checks into your pipeline to catch:
- Missing usage data for one or more Solutions.
- Allocations that don’t total 100% for a Sub-Tower.
- Negative or zero usage quantities.
- Large unexplained shifts vs. prior period (e.g., Storage for ERP jumps from 20% → 60%).
- Alerts: Configure dashboards or automated notifications when fallout thresholds are exceeded (e.g., >5% of Sub-Tower cost remains Shared/Unassigned).
Example QA test:
If SUM(allocation_pct) for any (tower, sub_tower, period) ≠ 100% (± rounding), flag an error and stop the load until fixed.
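The period-over-period swing test from the list above can also be automated. A SQL-style sketch against the Step 4 allocation table (percentages stored on a 0-100 scale):

-- Sketch only: flag allocation shares that swing more than 20 points
-- versus the prior period, per Solution and Sub-Tower.
WITH history AS (
    SELECT tower, sub_tower, solution_id, period, allocation_pct,
           LAG(allocation_pct) OVER (
               PARTITION BY tower, sub_tower, solution_id
               ORDER BY period) AS prior_pct
    FROM allocation_table
)
SELECT *
FROM history
WHERE ABS(allocation_pct - prior_pct) > 20;   -- review before loading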
7.3 Monitor maturity KPIs
Track a small set of key performance indicators (KPIs) each cycle to prove improvement and build trust:
- % Tower cost allocated by usage vs fallback — measure how much of your Walk allocations are truly usage-based vs proxy splits.
- Shared/Unassigned % by Sub-Tower — target a steady reduction over time (e.g., from 12% → 5% → <2%).
- % usage records with valid Solution lineage — confirm that CMDB/APM relationships or tags are improving quarter over quarter.
These KPIs should be published alongside Solution cost reports so stakeholders see not just “the numbers,” but also the quality of the process behind them.
7.4 Plan the path to Run
Walk is valuable on its own, but it’s also a staging ground for direct resource attribution (Run). Capture gaps and improvement actions needed to progress:
- Missing resource tags (e.g., solution_id not consistently applied in cloud billing).
- Incomplete telemetry (e.g., only aggregate GB-months available, no per-bucket detail).
- Weak CI relationships (e.g., Solution → App mapping incomplete in CMDB).
Document these gaps in a backlog and assign them to system owners (cloud, storage, CMDB, etc.). This makes the Run transition deliberate and planned, not accidental.
Output of Step 7: A repeatable, tool-based Walk allocation process that refreshes automatically, flags fallout early, reports improvement KPIs, and maintains a backlog of data quality and integration tasks required to advance toward Run.
8. Publish and Refresh Cadence
With allocations operationalized and QA checks in place, the last step is to make Walk outputs visible, consistent, and trustworthy to stakeholders. Publishing is more than generating a report — it’s about embedding Solution-level cost transparency into your recurring financial cycle.
8.1 Publish Solution TCO with transparency
- What to publish: Deliver a Solution TCO report that includes:
- Costs by Solution, Tower/Sub-Tower, and period.
- Usage drivers (e.g., GB-months, CPU hours, ticket counts) that explain the allocations.
- Coverage metrics (e.g., % Shared/Unassigned).
- Presentation: Include both financial and non-financial metrics (e.g., usage volumes) so stakeholders can see the linkage between costs and drivers.
- Delivery method: Depending on your tools, this may be:
- A dashboard in your TBM platform.
- A BI report (Power BI, Tableau, Qlik).
- A static pack for monthly/quarterly business reviews.
Tip – Transparency builds trust: Always show drivers and Shared/Unassigned explicitly. Hiding them makes fallout invisible and erodes credibility.
8.2 Establish refresh cadence
- Monthly or quarterly runs: Align with financial close or business reporting cadence.
- Changelog discipline: Each cycle, update a changelog noting:
- Data improvements (new usage feeds, better lineage).
- Policy changes (e.g., Dev/Test inclusion).
- Variances explained (e.g., Solution X usage spiked due to migration).
- Version control: Store every allocation run with a unique version ID so reports can always be tied back to their source mappings.
8.3 Communicate improvements and next steps
- Highlight progress: Show trends for fallout reduction (e.g., Shared Storage decreased from 12% → 6%).
- Announce maturity progress: Call out when a category has moved from Crawl → Walk or Walk → Run.
- Educate stakeholders: Include a “what’s next” section so business, Finance, and IT know what improvements are underway.
Next hop: Consumers
After publishing Solution costs, you may need to extend accountability to business consumers.
- Optional step: Attribute Solution costs to Consumers (business units, products, or projects) using drivers such as headcount, users, or tickets.
Acceptance Criteria (Walk)
To confirm the Walk process is both repeatable and defensible, ensure all the following conditions are met:
- Documented usage drivers exist for every in-scope Sub-Tower (with fallbacks clearly stated).
- Reconciled allocations: Sum of Solution allocations equals source Sub-Tower totals (± rounding) with no double counting.
- Transparent Shared/Unassigned bucket is visible, tracked, and shows a trend toward reduction.
- Versioned allocation tables are stored in a governed repository, with lineage and assumptions documented.
- Published Solution TCO report includes both cost and usage metrics, along with QA checks and coverage metrics.
- KPIs tracked consistently each cycle:
- % Tower cost allocated by usage vs fallback.
- Shared/Unassigned % by Sub-Tower.
- % usage records with valid Solution lineage.
Run – Direct Resource Mapping
Goal: Attribute individual resources (servers, cloud services, databases, containers, devices) directly to Solutions—and optionally to Offerings—for the highest-fidelity Solution TCO, unit costs, and optimization decisions.
Why this matters: Unlike driver/usage approaches, Run ties every dollar to a named resource and every resource to a specific Solution. That precision earns trust with Finance and product leaders—but only if you establish strong foundations (catalog, tags, lineage, policy) before loading data.
1. Prepare prerequisites and scope
This step lays the groundwork so your resource-level mappings are repeatable, auditable, and reconcilable. Many Run attempts fail not because of math, but because the catalog isn’t stable, tags are inconsistent, or policies are unclear. Do the checks below first; they save weeks of rework later.
1.1 Confirm or establish the Solution catalog
If you already have a Solution (or Service) Catalog:
- Ensure each Solution has a stable solution_id and solution_name.
- Confirm ownership (an accountable owner for each Solution).
- Decide whether Offerings will be included now. Only include them if reliable drivers (e.g., license tier, feature bundle, product plan) exist.
If you do not have a catalog:
- Seed an initial list using the TBM Taxonomy v5.0 Solutions (do not treat Applications as Solutions in v5.0).
- Review with Finance, IT leaders, and Solution owners to validate relevance.
Treat this as a working catalog and formalize it as your practice matures.
Callout – No catalog? Don’t stall. Starting with the v5.0 Solution list is better than waiting for a “perfect” catalog. You can refine names and add Offerings later without losing momentum.
1.2 Define Run scope by resource class and Sub-Tower
Pick the resource types you will map in this cycle, and tie each to the correct v5.0 Sub-Tower so reconciliation works:
- Compute: servers, VMs, hypervisors → Compute Sub-Tower
- Storage: volumes, buckets, file shares → Storage Sub-Tower
- Network: load balancers, gateways, firewalls → Network Sub-Tower
- Databases & Middleware: DB instances, app servers, message brokers → Database / Middleware Sub-Towers
- Cloud services: projects/accounts and managed services → appropriate Sub-Tower by service
- Containers/serverless: namespaces, workloads, functions → Compute/Platform Sub-Towers
- End-user devices (optional): only if device ownership maps cleanly to a Solution population
Tip – Prioritize impact. Start with 1–2 high-spend categories (e.g., cloud compute/storage) to maximize value and demonstrate the pattern.
1.3 Verify cost provenance and granularity
Confirm where the dollars will come from and at what level:
- Preferred: Direct resource cost (e.g., cloud line items, asset depreciation schedules, software entitlements).
- Fallback: If only Sub-Tower totals exist, plan to apportion Sub-Tower cost down to resources first (by usage, size, or count), then map resources to Solutions.
- Ensure every resource record can be tied back to its Tower/Sub-Tower for reconciliation.
Caution – Mixed granularity. Never blend direct resource cost and apportioned cost for the same resource in the same period.
1.4 Standardize identifiers and tagging
Enforce a common tag/key set across CMDB, cloud, and inventories. Minimum required:
- solution_id (stable key to the catalog)
- environment (prod / non-prod)
- resource_id (globally unique per resource)
- tower, sub_tower (source provenance)
- data_source (CMDB, CUR, APM, inventory)
- (Optional but recommended) owner_team, lineage_ref (job/run that produced the row)
Create and publish a tagging standard (names, allowed values, who sets them, where enforced). For cloud, configure provisioning policies or guardrails that require tags before launch.
Tip – Tag exceptions with expiry. If you must onboard untagged resources, mark them with an expiry date and route a remediation ticket on day one.
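To make the standard enforceable, a simple audit query helps; this sketch assumes a consolidated resource_inventory table carrying the tag columns above:
-- List resources missing any required tag so each can be routed
-- to a remediation ticket with an expiry date.
SELECT resource_id, tower, sub_tower, data_source
FROM resource_inventory
WHERE solution_id IS NULL
   OR environment IS NULL
   OR tower IS NULL
   OR sub_tower IS NULL;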
1.5 Align governing policies
Document rules once and apply them everywhere:
- Dev/Test handling: Included in TCO or reported separately? Tag and treat consistently.
- Shared/Unassigned: Keep untagged or ambiguous resources visible (do not smear across Solutions). Track as a KPI and assign remediation owners.
- Offerings: Only attribute to Offerings where drivers are reliable; otherwise attribute at the Solution level.
- Anti-double-count precedence: Resource-level attribution overrides element-level (Walk) for the same cost slice. Your QA must enforce this.
Policy anchor: For every (tower, sub_tower, period): Sum(Solution-attributed cost) + Shared/Unassigned = Sub-Tower total (± rounding).
1.6 Validate data readiness (set a “readiness bar”)
Check each primary source against minimum thresholds before proceeding:
- Cloud tags present: solution_id on ≥ 85% of monthly cost; named owner/project tags on ≥ 95%.
- CMDB lineage: Solution → service/app → CI path exists for ≥ 80% of in-scope CIs.
- APM/telemetry: Usage signals (CPU hrs, GB-months, connections) available for multi-Solution resources where splitting is required.
- Duplicates & orphan checks: No duplicate resource_ids; orphaned CIs are < 2% of the inventory.
If you miss the bar, document remediation actions (who, what, when) and either narrow scope or hold those categories at Walk.
Tip – Prove it with a pilot. Run a 2-week pilot on one resource class to confirm that tags, lineage, and costs reconcile end-to-end.
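The cloud-tag threshold, for example, can be measured with a query along these lines (the cloud_billing table and its columns are assumptions):
-- Share of monthly cloud cost carrying a solution_id tag,
-- compared against the 85% readiness bar.
SELECT period,
       SUM(CASE WHEN solution_id IS NOT NULL THEN cost ELSE 0 END)
         / NULLIF(SUM(cost), 0) * 100 AS tagged_cost_pct
FROM cloud_billing
GROUP BY period;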
1.7 Build the remediation backlog & rollout plan
Create a living backlog of the gaps discovered above:
- Missing tags on buckets/volumes → open tasks with platform owners
- CMDB relationship fixes for key apps → assign to service owners
- Telemetry enablement for DB or container platforms → assign to Ops/Observability
Prioritize by dollar impact and dependency (what unblocks more Run coverage next month).
Output of Step 1:
A validated Solution catalog (or a seeded one), a clearly defined Run scope by resource class and Sub-Tower, documented cost provenance, enforced tag/ID standards, aligned policies (Dev/Test, Shared/Unassigned, anti-double-count), a minimum mapping schema with governed storage, and a remediation backlog with owners and dates. You are now ready to assemble the authoritative resource inventory in Step 2.
2. Build an Authoritative Resource Inventory
Direct resource attribution lives or dies on the quality of your inventory. If you don’t know what resources exist, how they’re tagged, or how they relate to Sub-Towers, you can’t credibly push their costs into Solutions. This step is about pulling together a single, trustworthy view of your resources—whether on-premises, in the cloud, or in containers—and ensuring it aligns with your financial reporting periods.
Think of the inventory as your foundation: it should tell you what resources exist, how they’re identified, where they belong, and when they were active. Without this, attribution quickly turns into guesswork.
2.1 On-premises and virtual resources
Start with servers, VMs, and hypervisors managed through CMDB, orchestration tools, or virtualization platforms:
- Confirm each has a unique resource_id and consistent classification to a Sub-Tower (e.g., Compute, Storage).
- Standardize metadata such as hostname, environment (prod/dev/test), and owner.
- Where gaps exist (e.g., missing CI records), flag these resources for remediation rather than forcing them into a Solution.
2.2 Cloud resources
Pull billing exports and resource catalogs from cloud providers (AWS, Azure, GCP):
- Verify that tags like solution_id and environment are consistently applied.
- Normalize account/project IDs into your Resource Tower/Sub-Tower framework.
- If tagging is inconsistent, document untagged items and route them to Shared/Unassigned until corrected.
Callout – Cloud pitfalls: Cloud billing often includes thousands of line items. Focus first on high-value services (EC2, RDS, S3, GCS, Azure SQL) before expanding.
2.3 Storage, network, and platform services
Gather data on volumes, buckets, gateways, managed databases, and middleware:
- Confirm consistent identifiers (volume_id, bucket_name, db_instance).
- Where shared across multiple Solutions, note the need for split attribution in Step 3.
- Ensure Sub-Tower mapping is correct (e.g., don’t mix database instances into Compute).
2.4 Containers and serverless functions
Include Kubernetes namespaces/workloads and serverless functions:
- Require labels carrying solution_id and environment.
- Map workloads to Sub-Towers (e.g., Containers under Compute).
- Keep Shared/Unassigned visible when workloads lack labels.
2.5 End-user devices (optional)
Only include if device ownership can clearly be mapped to Solutions:
- Examples: dedicated kiosks for a customer-facing product.
- Avoid including general laptops/desktops unless you have a defensible link to a Solution population.
2.6 Harmonize and time-align
Bring it all together:
- Standardize key columns: resource_id, resource_type, tower, sub_tower, solution_id (if tagged).
- Align extract periods with your financial reporting (e.g., March costs → March resource inventory).
- Handle lifecycle events: tag decommissioned resources with end dates; don’t leave them “active” forever.
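A sketch of the consolidation step, assuming per-source extract tables that have already been normalized to the standard columns:
-- Consolidate source extracts into one time-aligned inventory
-- for the reporting period.
SELECT resource_id, resource_type, tower, sub_tower,
       solution_id, environment, 'CMDB' AS data_source
FROM cmdb_extract
WHERE period = '2025-03'
UNION ALL
SELECT resource_id, resource_type, tower, sub_tower,
       solution_id, environment, 'CloudBilling' AS data_source
FROM cloud_billing_extract
WHERE period = '2025-03';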
Output of Step 2: A consolidated, time-aligned inventory of resources (on-prem, cloud, storage, containers, devices) with unique identifiers, mapped to Sub-Towers, tagged with Solution IDs where possible, and Shared/Unassigned explicitly captured for anything missing. This dataset forms the base for attribution rules in Step 3.
3. Establish Solution Attribution Rules
With your resource inventory in place, the next step is deciding how each resource will be tied to a Solution. This step turns raw identifiers, tags, and relationships into actionable mapping rules. Without clear, repeatable rules, your attributions will drift over time and lose credibility.
Think of this step as writing the playbook: for each type of resource, define whether you will rely on tags, relationships, or usage splits—and what fallback you’ll use when the preferred method isn’t available.
3.1 Tag-based attribution
The gold standard for Run is direct tagging with solution_id (and environment).
- Require all new resources to be tagged at provisioning.
- Build automated checks to flag untagged resources.
- Untagged items flow into Shared/Unassigned with a remediation task assigned (e.g., tagging backlog in ServiceNow or Jira).
Worked Example – Cloud VM:
Resource: ec2/i-12345
Tags: solution_id=ERP, environment=Prod
→ 100% of its cost is attributed to the ERP Solution.
3.2 Relationship-based attribution
Not all environments use tags consistently. In these cases, rely on relationships in CMDB or APM tools:
- Traverse: Solution → Application/Service → CI (e.g., database instance, VM).
- Attribute the cost of the CI to the Solution(s) it supports.
- If a CI supports multiple Solutions, flag it for Step 3.3 (multi-solution splits).
Tip – Prefer direct links: A CI explicitly linked to a Solution is always more credible than indirect associations.
3.3 Multi-solution resources
Some resources (databases, storage clusters, middleware) serve multiple Solutions. These must be split, but only with evidence:
- Primary method: usage-based split (e.g., CPU hours, I/O, storage consumed).
- Fallback: equal split across in-scope Solutions, with an expiry date for remediation.
- Document the assumption so stakeholders know where precision needs to improve.
Worked Example – Database instance:
Cost = $4,000/month
Usage = ERP 70%, Analytics 30% (from connection metrics)
Attribution = $2,800 to ERP, $1,200 to Analytics.
3.4 Offerings (optional)
Only split into Offerings when you have reliable drivers (e.g., license tiers, feature bundles). Otherwise, keep everything at the Solution level to avoid false precision.
Callout – Guardrail: Offerings should never be guessed. If you don’t have usage data or license metrics, defer.
3.5 Non-production & shared services
Apply your documented Environment Inclusion and Shared/Unassigned policies from Step 1:
- If Dev/Test is excluded, tag and filter it out.
- If included, roll it up into the Solution TCO with a clear label.
For shared services (e.g., monitoring, authentication), keep costs transparent in Shared/Unassigned until usage drivers exist.
Output of Step 3: A defined set of attribution rules that cover all resources in your inventory. Each rule specifies the method (tag, relationship, usage split, or manual), the fallback path, and the treatment of Shared/Unassigned. This rulebook provides traceability and ensures your mappings can be reproduced and audited.
4. Create the Resource→Solution Mapping Table
Once attribution rules are defined, you need to translate them into a structured mapping table. This table is the core artifact of Run: it connects individual resources (VMs, databases, cloud services, devices) to the Solutions they serve. Think of it as the “ledger” that records every mapping decision—what resource was mapped, to which Solution, by what method, and under what assumptions.
Without this table, your mappings live in scattered scripts or spreadsheets. With it, you have a repeatable, auditable foundation for Solution TCO.
4.1 Define the schema
At a minimum, include the following fields (aligned with the Unified Tags & Fields Reference):
- tower / sub_tower – Provenance of the cost (e.g., Compute → Virtual Servers).
- period – Financial period (month/quarter).
- solution_id / solution_name – The Solution receiving cost attribution.
- offering_id / offering_name (optional) – If Offerings are in use.
- resource_id / resource_type – Unique identifier and type (e.g., ec2/i-123, VM, DB).
- mapping_method – Tag, CI_link, usage_split, or manual.
- allocation_pct – 100% for single-Solution resources; % split for multi-Solution.
- driver_qty – Evidence for usage splits (CPU hours, storage GB).
- mapping_confidence – H/M/L quality rating.
- mapping_notes – Rationale or caveats.
- data_source – Where the mapping came from (cloud billing, CMDB, APM).
- environment – Prod/Non-Prod, consistent with policy.
- owner_team – Who is accountable for fixing issues or gaps.
- effective_start / effective_end / version – Version control metadata.
- last_reviewed_by / date – Governance accountability.
- shared_unassigned_flag – Y/N to explicitly track unmapped resources.
Callout – Why schema matters: Even if you’re piloting in Excel, design the table as if it will feed BI/TBM tools. This saves rework when you automate.
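An illustrative DDL for the table, following the field list above; the column types and composite key are assumptions to adapt to your warehouse:
-- Resource→Solution mapping table; one row per
-- resource–Solution mapping per period.
CREATE TABLE resource_solution_mapping (
    tower                  VARCHAR(64)  NOT NULL,
    sub_tower              VARCHAR(64)  NOT NULL,
    period                 DATE         NOT NULL,
    solution_id            VARCHAR(64)  NOT NULL,
    solution_name          VARCHAR(128),
    offering_id            VARCHAR(64),
    offering_name          VARCHAR(128),
    resource_id            VARCHAR(128) NOT NULL,
    resource_type          VARCHAR(64),
    mapping_method         VARCHAR(16),  -- tag | CI_link | usage_split | manual
    allocation_pct         DECIMAL(5,2) NOT NULL,
    driver_qty             DECIMAL(18,4),
    mapping_confidence     CHAR(1),      -- H | M | L
    mapping_notes          VARCHAR(512),
    data_source            VARCHAR(64),
    environment            VARCHAR(16),
    owner_team             VARCHAR(64),
    effective_start        DATE,
    effective_end          DATE,
    version                VARCHAR(32)  NOT NULL,
    last_reviewed_by       VARCHAR(64),
    last_reviewed_date     DATE,
    shared_unassigned_flag CHAR(1) DEFAULT 'N',
    PRIMARY KEY (resource_id, solution_id, period, version)
);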
4.2 Populate rows at scale
Populate one row per resource–Solution mapping per period:
- Use automated joins where possible:
- Cloud billing export + tags → direct joins to Solutions.
- CMDB CI relationships → joined to Solution IDs.
- For multi-Solution resources, populate one row per Solution with its allocation_pct.
- Manual mappings should be rare; flag them with low confidence and expiry dates.
Worked Example – Cloud Database:
Resource ID: rds/db-xyz
Tower/Sub-Tower: Database → Relational DB
Cost: $4,000 (from billing export)
Split: ERP 70% / Analytics 30% (from connection metrics)
Rows:
- ERP, allocation_pct=70, allocated_cost=$2,800
- Analytics, allocation_pct=30, allocated_cost=$1,200
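A sketch of the tag-based automated join, assuming a cloud_billing table with a solution_id_tag column and a validated solution_catalog:
-- Tag-based rows: billing lines whose tag matches the catalog
-- become single-Solution mappings at 100%.
INSERT INTO resource_solution_mapping
    (tower, sub_tower, period, solution_id, resource_id,
     resource_type, mapping_method, allocation_pct, data_source)
SELECT b.tower, b.sub_tower, b.period, c.solution_id, b.resource_id,
       b.resource_type, 'tag', 100, 'cloud_billing'
FROM cloud_billing b
JOIN solution_catalog c ON c.solution_id = b.solution_id_tag
WHERE b.period = '2025-03';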
4.3 Keep unresolved items visible
Not all resources can be mapped immediately. Instead of hiding them:
- Create rows with solution_id=Shared/Unassigned.
- Mark with shared_unassigned_flag=Y.
- Add notes like: “Missing tag; ticket raised for remediation.”
This keeps fallout transparent and measurable, rather than silently absorbed.
4.4 Attach governance metadata
To make the table auditable, always include:
- mapping_confidence – rate whether attribution is reliable.
- mapping_notes – capture rationale (e.g., “Manual 50/50 split until telemetry enabled”).
- lineage_ref (optional) – pointer to the job/script/run that generated this mapping.
Tip – Audit-ready: Governance metadata is your shield during audits. If a number is questioned, you can show exactly how it was produced.
Output of Step 4: A structured, versioned mapping table that records every resource-to-Solution relationship, including unresolved cases, governance metadata, and allocation percentages. This is now ready to be joined with cost data in Step 5.
5. Join Costs and Calculate Attribution
Now that you’ve built the Resource→Solution mapping table, it’s time to apply real dollars. This is the step where your resource-level mappings become Solution TCO in financial terms. The principle is simple: every dollar of Sub-Tower cost must be traced to a specific Solution, or explicitly called out as Shared/Unassigned.
This step is where credibility is earned or lost. If your math doesn’t reconcile or if costs appear in multiple places, stakeholders will lose trust. Treat reconciliation as a mandatory QA gate, not an optional exercise.
5.1 Choose cost basis
Determine how you’ll source costs for each resource class:
- Direct resource costs (preferred): Cloud line items, asset depreciation schedules, lease costs, or support contracts tied to a resource ID. These provide the cleanest attribution.
- Apportioned Sub-Tower costs (fallback): When resource-level costs don't exist (e.g., internal servers tracked only at the pool level), allocate the Sub-Tower total down to resources first, then map resources to Solutions.
Callout – Don’t mix bases: Pick one cost basis per Sub-Tower. Mixing direct billing data with apportioned costs will distort results.
5.2 Compute allocations
Apply the mapping table to costs:
- Single-Solution resources: Allocate 100% of the cost to the mapped Solution.
- Multi-Solution resources: Use allocation_pct from the mapping table to split costs proportionally.
- Shared/Unassigned resources: Attribute 100% to the Shared/Unassigned bucket until remediated.
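In SQL terms, the calculation is a join between the mapping table and resource costs (resource_costs is an assumed table):
-- Dollarize each mapping row as its share of the resource cost.
SELECT m.solution_id, m.resource_id,
       r.cost * m.allocation_pct / 100 AS allocated_cost
FROM resource_solution_mapping m
JOIN resource_costs r
  ON r.resource_id = m.resource_id
 AND r.period = m.period
WHERE m.period = '2025-03';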
5.3 Reconcile to Sub-Tower totals
Run a reconciliation check for every (tower, sub_tower, period):
- Sum of all resource-attributed costs (including Shared/Unassigned) must equal the Sub-Tower total.
- Allow only minor rounding differences.
Formula: Sum(resource-attributed cost, including Shared/Unassigned) = Sub-Tower total (± rounding).
If costs don’t reconcile, review for:
- Missing resources (costs not mapped).
- Double entries (resource in both direct cost and apportioned pool).
- Incorrect allocation percentages (sum ≠ 100%).
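A reconciliation query along these lines can serve as the QA gate (allocated_costs and sub_tower_costs are assumed tables):
-- Any Sub-Tower where attributed cost (incl. Shared/Unassigned)
-- differs from the source total beyond rounding needs review.
SELECT a.tower, a.sub_tower, a.period, t.sub_tower_total,
       SUM(a.allocated_cost) AS attributed_total
FROM allocated_costs a
JOIN sub_tower_costs t
  ON t.tower = a.tower
 AND t.sub_tower = a.sub_tower
 AND t.period = a.period
GROUP BY a.tower, a.sub_tower, a.period, t.sub_tower_total
HAVING ABS(t.sub_tower_total - SUM(a.allocated_cost)) > 0.01;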
5.4 Anti-double-count QA
Double counting is the #1 risk at Run maturity. To prevent it:
- Suppress overlap: If a resource is mapped at the instance level, exclude it from any element-level (Walk) allocation for the same cost slice.
- QA query (SQL-style):
-- Flag any resource whose allocation percentages do not sum to
-- 100% within a period (catches bad splits and duplicates).
SELECT resource_id, period, SUM(allocation_pct) AS total_pct
FROM resource_solution_mapping
GROUP BY resource_id, period
HAVING SUM(allocation_pct) <> 100;
This flags resources whose combined allocations are wrong—including resources that have been mapped multiple times incorrectly.
Callout – Universal Reconciliation Rule:
For every (tower, sub_tower, period): Sum(Solution allocations) + Shared/Unassigned = source Sub-Tower cost (± rounding).
5.5 Roll up to Solution totals
Once reconciled:
- Aggregate by solution_id → produce total Solution TCO.
- Optionally roll up by Offering, Solution Category, or Solution Type (if your catalog supports it).
- Produce unit cost metrics (e.g., $/user, $/GB-month) if usage data is available.
Tip – Traceability matters: Always retain the ability to drill down from Solution totals → Sub-Tower → resource. This is what makes Run defensible to auditors and credible to stakeholders.
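A rollup sketch, assuming the dollarized allocation rows from Step 5.2 carry driver_qty where usage is known:
-- Solution TCO with an optional unit cost where usage exists.
SELECT solution_id,
       SUM(allocated_cost) AS solution_tco,
       SUM(allocated_cost) / NULLIF(SUM(driver_qty), 0) AS cost_per_unit
FROM allocated_costs
WHERE period = '2025-03'
GROUP BY solution_id
ORDER BY solution_tco DESC;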
Output of Step 5: Reconciled, dollarized allocations where every resource’s cost is attributed to a Solution (or Shared/Unassigned). Totals tie exactly back to Sub-Tower costs, no streams are double counted, and Solution TCO is ready for validation with stakeholders.
6. Validate with Stakeholders
Even with perfect reconciliation, a Run-level mapping isn’t complete until the people who own and consume the technology have reviewed it. Validation ensures your attribution is not just mathematically correct but also business-credible. It’s also your chance to build trust—if Solution owners and Finance see how costs flow, they’ll be more willing to use the outputs for decision-making.
6.1 Prepare a review pack
Summarize your results in a way that stakeholders can digest without needing to comb through raw tables. Include:
- Inputs: Which Towers/Sub-Towers, resource types, and cost bases were included.
- Methodology: How resources were attributed (tags, CMDB links, usage splits, manual overrides).
- Outputs: Solution TCO totals, Shared/Unassigned amounts, and unit cost metrics if calculated.
- Limitations: Known gaps (e.g., "5% of cloud resources untagged, targeted for remediation").
Tip – Tailor the pack: Finance cares about reconciliation and coverage; Solution owners want examples of their mapped resources; platform leads want to see how Shared/Unassigned is being handled.
6.2 Invite the right stakeholders
Run-level attribution spans multiple domains, so invite a cross-functional group:
- Platform leads (infrastructure, cloud, storage, network).
- Solution owners (the business-facing services consuming resources).
- Finance / FP&A (to verify consistency with financial reports).
- TBM / FinOps admins (to speak to data lineage and rules).
This mix ensures that both supply-side accuracy and demand-side accountability are tested.
6.3 Walk through the evidence
In the review session:
- Show a few worked examples:
- A tagged resource attributed 100% to a Solution.
- A multi-Solution resource split by usage.
- An untagged resource flowing into Shared/Unassigned.
- Explain lineage clearly (e.g., “This VM is linked to ERP via CMDB relationship → Service ERP-Prod → Solution ERP”).
- Highlight reconciliation: prove that resource totals equal Sub-Tower totals.
This builds credibility that the model is systematic, not arbitrary.
6.4 Capture feedback and adjustments
Expect stakeholders to raise concerns—treat them as inputs, not obstacles:
- Adjust mappings if tags are incorrect or if CMDB relationships are stale.
- Update mapping_confidence fields where lineage is weaker, so transparency is maintained.
- Log data asks for systemic fixes (e.g., “tag policy update for storage buckets”).
Always bump the version of your mapping table after changes so the audit trail is clear.
6.5 Secure sign-off
End the review with explicit confirmation:
- Platform leads agree that resource attribution matches reality.
- Solution owners agree that TCO results are directionally accurate.
- Finance confirms that totals reconcile to Sub-Tower costs.
Capture approvals in meeting notes or governance tools (e.g., Jira, Confluence, SharePoint).
Output of Step 6: Stakeholder-validated mappings with documented feedback, updated versions, and explicit approvals. The model is now both mathematically defensible and organizationally trusted, ready for operationalization.
7. Operationalize & Manage Quality
Once Run-level mapping has been validated, the focus shifts to making it repeatable, automated, and governed. Manual runs can prove the concept, but without pipelines, QA, and governance, your model won’t scale or withstand audits. This step makes Solution attribution a sustainable business process instead of a one-off project.
7.1 Select your implementation approach
Decide how mappings will be maintained and refreshed:
- Pilot/low-volume: Excel or CSV files for proof-of-concept or small Sub-Towers.
- Intermediate: SQL queries or scripts (Python, R) to automate joins and calculations.
- Enterprise-scale: ETL/BI tools (Alteryx, Power BI, Tableau Prep) or TBM platforms that integrate directly with cost, inventory, and CMDB sources.
Tip – Match tool to maturity: Don’t overengineer. If you have <1,000 resources, a warehouse + SQL might be enough. At >10,000 cloud resources, invest in automation pipelines.
7.2 Automate refresh cycles and QA
Create automated jobs that pull costs, inventories, and tags into your mapping pipeline each financial cycle. Layer in QA tests that run before outputs are published:
- Completeness: All in-scope resources have mappings (Solution or Shared/Unassigned).
- Integrity: No duplicate resource IDs; no allocation_pct >100%.
- Stability: Variance thresholds catch mix shifts (e.g., Solution allocation changes >20% month-over-month flagged for review).
- Coverage: Shared/Unassigned % trending down, not up.
Document these tests so auditors can see they’re enforced systematically.
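For instance, the stability test can be expressed against a per-period allocation-share view (solution_share and its columns are assumptions):
-- Flag Solutions whose share of a Sub-Tower shifted more than
-- 20 points month-over-month.
SELECT cur.sub_tower, cur.solution_id,
       prev.alloc_share AS prior_share,
       cur.alloc_share AS current_share
FROM solution_share cur
JOIN solution_share prev
  ON prev.sub_tower = cur.sub_tower
 AND prev.solution_id = cur.solution_id
WHERE cur.period = '2025-03'
  AND prev.period = '2025-02'
  AND ABS(cur.alloc_share - prev.alloc_share) > 20;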
7.3 Enforce governance and accountability
Run-level attribution only sustains if governance is built in:
- Tag enforcement: Require solution_id and environment at provisioning. Use automation (e.g., cloud policy engines) to block untagged resources.
- CI relationship SLAs: Set expiry/refresh rules for CMDB links; stale entries trigger alerts.
- Exception handling: Maintain a Shared/Unassigned exception queue with owners and due dates for remediation.
Callout – Governance isn’t optional: Without enforced policies, Shared/Unassigned grows quietly and erodes credibility. Treat governance controls as part of your TBM operating model.
7.4 Manage resource lifecycle and history
Resources constantly enter and exit your environment. To keep mappings accurate:
- New resources: Ensure tags are applied or relationships established before they accrue cost.
- Decommissioned resources: Close mappings cleanly; don’t leave costs stranded in historical rows.
- Backfill tags: If tags arrive late, retroactively correct prior months’ mappings to keep historical reporting consistent.
- Version snapshots: Store each cycle’s mapping as an immutable version for auditability.
7.5 Protect sensitive data
Resource-level mapping can expose device or user-level detail. Apply privacy and security safeguards:
- Mask or hash sensitive identifiers (usernames, device serials).
- Aggregate where necessary (e.g., show Solution totals, not individual laptop costs).
- Follow data handling rules for personally identifiable information (PII) or regulated data.
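A minimal masking sketch; SHA2 is the MySQL-style hash function and the table is hypothetical, so substitute your warehouse's hash function and your own salt management:
-- Publish a salted hash instead of the raw device serial.
SELECT resource_id,
       SHA2(CONCAT('per-cycle-salt', device_serial), 256) AS device_hash
FROM end_user_device_inventory;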
Output of Step 7: A governed, automated mapping pipeline with QA checks, versioned outputs, lifecycle controls, and governance policies. Resource→Solution attribution is now repeatable, scalable, and secure, supporting monthly or quarterly financial close with confidence.
8. Publish, Measure, and Iterate
The final step in Run-level mapping is to make the work visible, actionable, and continuously improving. Publishing ensures stakeholders see the value of high-fidelity mapping. Measurement provides proof of accuracy and progress. Iteration turns mapping into a virtuous cycle, not a one-time exercise.
8.1 Publish Solution cost views
Tailor outputs to different audiences, ensuring each group gets information at the right level of detail:
- Finance leaders: High-level dashboards or PDFs showing Solution TCO, unit costs (per user/txn/GB-month), and trends vs budget.
- Solution owners: Interactive BI dashboards with drill-down into Tower/Sub-Tower and resource-level mappings. Highlight Shared/Unassigned to reinforce accountability.
- TBM/FinOps/IT Finance teams: Full mapping tables, with lineage metadata, driver sources, and QA results.
Tip – Don’t hide Shared/Unassigned: Keep unmapped costs visible as a distinct category. Transparency builds trust, even when data gaps exist.
8.2 Track maturity KPIs
Publish operational KPIs alongside cost reports so stakeholders see progress in data quality as well as financial outcomes. Examples:
- Direct attribution coverage: ≥90% of targeted resource costs mapped directly to Solutions.
- Shared/Unassigned rate: Trending downward over time (<5% is a common target).
- Reconciliation accuracy: Variance between Sub-Tower totals and aggregated allocations = 0% (± rounding).
- Tag/lineage completeness: % of resources with valid solution_id, environment, and data_source.
- Time-to-fix unmapped: Median number of days to resolve Shared/Unassigned items (target <30).
Callout – KPIs tell the story: Use them to show Finance and business stakeholders that TBM is not static; it’s actively maturing.
8.3 Establish cadence and iteration loops
Make publishing and measurement part of your regular operating rhythm:
- Monthly runs: Ideal for organizations that close financials monthly and need timely cost visibility.
- Quarterly runs: Sufficient for slower-moving organizations but risks stale insights.
- Trigger-based refreshes: Re-run when major changes occur (e.g., reorg, taxonomy update, new tagging policy).
Each cycle, use outputs to fuel improvement:
- Identify high Shared/Unassigned areas and prioritize remediation.
- Highlight Solutions with large cost swings for investigation.
- Feed KPIs into governance forums (e.g., TBM Council, FinOps steering group).
8.4 Consumer handoff (optional but powerful)
If your organization needs business accountability, extend the Run outputs into Consumer mapping:
- Attribute Solution costs to business units, products, or portfolios using drivers such as users, headcount, or transaction counts.
- Show each BU not just what Solutions they consume, but the true cost of those Solutions down to resource level.
This closes the loop between IT cost transparency and business decision-making.
Output of Step 8: Published Solution TCO reports with resource-level fidelity, clear Shared/Unassigned visibility, tracked KPIs, and a repeatable refresh cadence. Stakeholders have both financial insights and confidence in the quality of the model, with a clear path to business accountability through Consumer attribution.
Acceptance Criteria (Run)
You know your Run cycle is complete — and ready for broad stakeholder consumption — when all of the following conditions are met:
- Resource→Solution mapping populated and versioned
- Every targeted resource type (servers, VMs, storage, cloud services, containers, etc.) is mapped to a Solution or explicitly routed to Shared/Unassigned.
- Mapping tables include governance metadata (solution_id, resource_id, mapping_method, mapping_confidence, lineage references, versioning, effective dates).
- ≥90% of targeted resource cost directly attributed
- At least 90% of the Sub-Tower costs in scope are attributed directly to Solutions through tags, CI links, or usage splits.
- The remaining costs are visible in Shared/Unassigned with remediation plans and owners.
- Exact reconciliation to Sub-Tower totals
- For every (tower, sub_tower, period), the sum of all Solution allocations plus Shared/Unassigned equals the source Sub-Tower cost (± rounding).
- A reconciliation check is documented and retained for auditability.
- No double counting
- Resource-level attributions are excluded from parallel element-level allocations.
- A QA rule confirms that each cost stream appears once and only once in the model.
- Transparent Shared/Unassigned bucket
- Unmapped resources and costs appear as their own line items.
- The % of Shared/Unassigned is published, tracked, and trending downward over time (<5% is a common maturity target).
- Stakeholder validation and sign-off
- Platform leads, Finance, TBM admins, and Solution owners have reviewed mapping logic, validated outputs, and approved directional accuracy.
- Sign-offs are recorded with version notes.
- Automated refresh with QA in place
- Pipelines or processes run at least monthly (or at the financial close).
- Data quality tests (e.g., unmapped resources, duplicate IDs, allocations >100%, sudden mix shifts) are automated.
- Failures are logged and resolved within SLA.
- KPIs published and tracked
- Coverage, Shared/Unassigned rate, tag/lineage completeness, reconciliation deltas, and time-to-fix metrics are published alongside Solution TCO.
- Trends are reviewed in governance forums to demonstrate improvement.
- Outputs delivered to stakeholders
- Solution TCO and unit cost views are published in appropriate formats for Finance, Solution owners, and TBM/FinOps teams.
- Shared/Unassigned is clearly displayed with remediation paths.
Quick Tip: Run success isn’t about perfection. Even in high-maturity models, some fallout will remain. The measure of success is transparency, accuracy, and repeatability — with clear evidence that the Shared/Unassigned portion is shrinking over time.