Updated May 2026
Methodology
How TechStackCost.com verifies per-employee SaaS spend benchmarks, per-engineer ranges, hidden-cost multipliers, and cloud reference rates. Every number on the site should be re-derivable from the cited public source using the formulas shown here.
Sources
Each cost reference on the site cites its originating source: a research publication for per-employee benchmarks and waste rates, the vendor's own public pricing page for tool list prices, the FinOps Foundation for category framing, and BLS for the labour reference used when converting per-engineer tooling cost into total-cost-of-engineer math. Aggregator restatements are not used as primary citations.
| Source | Refresh cadence | What we take from it |
|---|---|---|
| Zylo SaaS Management Index | Annual + quarterly updates | Per-employee SaaS spend benchmark (the $4,830/employee anchor on the homepage and SaaS-spend-benchmarks page), waste-rate anchor (30 percent), shadow-IT prevalence (one-third of SaaS apps). Methodology is published with each report. |
| Productiv State of SaaS | Annual | Industry breakdown for per-employee SaaS spend (the tech $6,200-$8,500, finance $5,800-$7,200, manufacturing $3,200-$4,500 ranges). Application portfolio size benchmark (the 275-apps-per-company figure). |
| Gartner IT Key Metrics Data | Annual | IT-spend-as-percent-of-revenue benchmark (the 3.28 percent overall, 7-8 percent financial services, 5-8 percent technology, 4-6 percent healthcare, 1.5-3 percent manufacturing ranges). Source uses CIO survey aggregated by industry. |
| Flexera State of the Cloud Report | Annual | Cloud waste rate (the 28-35 percent anchor), right-sizing savings (the 20-35 percent quick-win figure), reserved-instance and savings-plan adoption rates, and the per-organisation cloud spend distribution. |
| Stack Overflow Developer Survey | Annual | Developer tool adoption rates by language, framework, and category. Used to weight the developer-tooling category coverage so the most-adopted tools (VS Code, JetBrains, GitHub, Slack, Jira) lead each table rather than the most-expensive. |
| JetBrains State of Developer Ecosystem | Annual | Complements Stack Overflow on IDE share, language usage trends, and adoption of AI code assistants. Cross-checked against survey-of-developers reports for category framing on the developer-tools page. |
| FinOps Foundation publications | Continuous | FinOps lifecycle framing (visibility, optimisation, forecasting, continuous improvement), cloud cost governance principles, and the typical-savings benchmarks used on the optimization page. Authoritative for FinOps category framing. |
| Vantage Cloud Cost Reports | Quarterly | Cross-vendor cloud rate aggregation for AWS, GCP, Azure spot and on-demand pricing trends. Used as a cross-check on vendor public pricing pages on the cloud-costs page; not the primary source. |
| CAST AI cloud cost research | Annual | Kubernetes-specific waste and right-sizing data used on the cloud-costs page. Cross-referenced against Flexera for general cloud waste-rate anchors. |
| BLS Occupational Employment and Wage Statistics | Annual | Software engineer base wage anchors (15-1252 Software Developers, 15-1253 Software Quality Assurance, 15-1244 Network and Computer Systems Administrators) used as the labour reference when converting per-engineer tooling cost into total-cost-of-engineer framing. We do not republish wages on-page; we cite BLS as the methodology anchor. |
| AWS public pricing pages | Monthly | EC2 on-demand instance rates, EBS storage, S3 storage, data transfer (egress) pricing. US-East-1 list price is the reference; spot, savings plans, and reserved capacity are out of scope to keep cross-vendor comparison apples-to-apples. |
| Google Cloud public pricing pages | Monthly | Compute Engine on-demand rates, Cloud Storage, BigQuery on-demand pricing, network egress. Used on the cloud-costs and data-platform-cost pages. |
| Microsoft Azure public pricing pages | Monthly | Virtual Machines on-demand rates, Blob Storage, Azure SQL, bandwidth pricing. Used on the cloud-costs page. |
| Datadog, Snowflake, GitHub, Linear, Notion, Figma, Slack public pricing pages | Monthly | Per-tool published list prices and per-seat tiers. Updates are pulled from each vendor's own pricing page; aggregator restatements are not used. Specific brand-pricing deep-dives are deliberately not built as standalone pages on this site to avoid the trust ceiling that comes with brand-named cost pages. |
In scope
- Vendor-published list prices on each vendor's own public pricing page.
- Per-employee SaaS spend benchmarks from named research sources (Zylo, Productiv, Flexera) with the citation visible on every page that uses the number.
- Per-engineer stage bands derived from the same research applied to engineering-organisation headcount.
- Hidden-cost multiplier (1.4-1.6x on top of visible spend), anchored to published industry research and shown with a worked example.
- Waste-rate anchors (30 percent SaaS, 28-35 percent cloud) from named sources, cited inline.
- Cloud reference rates at US-East-1 list price; deployment-model TCO comparisons using the same on-demand baseline.
- FinOps framing (lifecycle phases, governance principles, typical-savings ranges) cross-referenced against the FinOps Foundation.
Out of scope
- Enterprise-negotiated pricing. Volume commitments, contract minimums, custom MSAs, and EDP-class discounts are deliberately excluded.
- Spot, savings plan, and reserved-capacity discounts on AWS, GCP, and Azure. Cloud rate comparisons use on-demand list price to keep cross-vendor math consistent.
- Regional surcharges. AWS pricing varies 10-20 percent by region; comparisons use US-East-1 (and generic SaaS list tiers) unless flagged otherwise.
- Brand-named pricing pages (a dedicated /aws-pricing, /datadog-pricing, or /snowflake-pricing page). These attract brand-trust-ceiling competition the site cannot win and carry Lanham Act exposure that has bitten sister sites in the security cluster.
- Internal loaded-rate estimates for hidden-cost components beyond the published industry range. The 1.4-1.6x multiplier is the published anchor; teams should substitute their own organisation-specific number when re-running the math.
- Lead-generation forms, gated content, or affiliate links. The site is a reference, not a funnel.
Calculation framework
Per-employee SaaS spend
Sourced from the Zylo SaaS Management Index ($4,830/employee/yr overall anchor), with industry splits cross-referenced against Productiv. Where the two sources differ by more than 15 percent, the page shows the range rather than picking a point estimate. The methodology difference (Zylo measures invoiced SaaS spend per FTE; Productiv adds shadow-IT estimates) is flagged on the SaaS-spend-benchmarks page.
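The range-versus-point rule can be sketched as a small check. This is illustrative, not site code: the function name, the midpoint fallback, and the choice of the smaller source value as the divergence denominator are all assumptions.

```typescript
// Published value is either a single anchor or a low-high range.
type Benchmark = { point: number } | { low: number; high: number };

// If two sources diverge by more than the threshold (15 percent by
// default, measured against the smaller value -- an assumption),
// publish the range; otherwise publish the midpoint as one anchor.
function reconcile(a: number, b: number, threshold = 0.15): Benchmark {
  const divergence = Math.abs(a - b) / Math.min(a, b);
  if (divergence > threshold) {
    return { low: Math.min(a, b), high: Math.max(a, b) };
  }
  return { point: (a + b) / 2 };
}
```

With the Zylo anchor of 4,830 and a hypothetical second source at 5,000, the gap is about 3.5 percent, so a single point would be shown; a 4,000-versus-5,000 split (25 percent) would be shown as a range.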
Per-engineer stage bands
Stage bands are derived from the typical tool stack at each company stage (seed through enterprise) applied to engineering-organisation headcount. Each band has a worked example showing the component breakdown: cloud allocation per engineer, IDE and CI/CD per seat, monitoring share, data platform allocation, security tooling, and collaboration tools. The example on the cost-per-engineer page shows the math so the band is verifiable.
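The component roll-up behind each band is a straight sum. A minimal sketch, with the component names mirroring the breakdown above; every dollar figure in the usage example below is a placeholder, not a published benchmark:

```typescript
// Monthly cost components attributed to one engineer. Field names
// follow the breakdown on the cost-per-engineer page; the structure
// itself is illustrative.
interface EngineerCostComponents {
  cloudAllocation: number;  // cloud spend attributed per engineer
  ideAndCiCd: number;       // IDE licences plus CI/CD per seat
  monitoringShare: number;  // observability spend divided by headcount
  dataPlatform: number;     // warehouse / pipeline allocation
  securityTooling: number;
  collaboration: number;    // chat, docs, issue tracking
}

// Per-engineer monthly total is the sum of the components.
function perEngineerMonthly(c: EngineerCostComponents): number {
  return (
    c.cloudAllocation + c.ideAndCiCd + c.monitoringShare +
    c.dataPlatform + c.securityTooling + c.collaboration
  );
}
```

For example, placeholder components of $400 cloud, $60 IDE/CI, $90 monitoring, $120 data platform, $50 security, and $45 collaboration roll up to $765/engineer/month.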
Hidden-cost multiplier
The 1.4-1.6x multiplier is the published industry range for additional cost on top of visible vendor invoices. It is composed of training and onboarding ($3,000-$25,000 per engineer per tool depending on complexity, sourced from corporate-training industry publications), maintenance burden (8-15 percent of engineering capacity, sourced from DevOps research and engineering-blog post-mortems), vendor lock-in exit cost ($500,000 to $10M+ for major migrations, sourced from published case studies), and technical-debt velocity impact (15-25 percent reduction, sourced from engineering-productivity research). The hidden-costs page shows the worked example: $100K visible monthly becomes $140K-$160K total.
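The worked example reduces to one multiplication against the published range. A minimal sketch (the function name is illustrative; the 1.4 and 1.6 defaults are the published anchors):

```typescript
// Apply the published 1.4-1.6x hidden-cost multiplier to visible
// monthly spend. Teams should substitute their own organisation-
// specific multiplier when re-running the math.
function hiddenCostRange(visibleMonthly: number, low = 1.4, high = 1.6) {
  return { low: visibleMonthly * low, high: visibleMonthly * high };
}
```

Calling `hiddenCostRange(100_000)` reproduces the worked example: $100K visible monthly becomes a $140K-$160K estimated total.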
Waste rates
The 30 percent SaaS waste rate is the published industry anchor (Zylo, Productiv, BetterCloud all converge on this number). The 28-35 percent cloud waste rate is the Flexera anchor. Both numbers are presented as published ranges. The optimization page applies these anchors to a worked example showing typical savings from a structured audit (25-30 percent total stack reduction, 60-70 percent of which comes from the first round).
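The audit-savings math on the optimization page can be restated as a short function. The 25-30 percent and 60-70 percent figures are the published anchors above; the function name and return shape are illustrative:

```typescript
// Apply the published audit-savings range (25-30 percent total stack
// reduction) to a monthly spend, and split out the share that
// typically lands in the first audit round (60-70 percent).
function auditSavings(monthlySpend: number) {
  const totalLow = monthlySpend * 0.25;
  const totalHigh = monthlySpend * 0.30;
  return {
    totalLow,
    totalHigh,
    firstRoundLow: totalLow * 0.60,
    firstRoundHigh: totalHigh * 0.70,
  };
}
```

At a hypothetical $100K/month stack, the structured-audit range is $25K-$30K/month in total savings, of which roughly $15K-$21K would typically come from the first round.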
Cloud reference rates
AWS EC2 on-demand US-East-1 is the reference instance pricing on the cloud-costs page. Google Cloud Compute Engine on-demand US-Central-1 and Azure Virtual Machines on-demand East-US-2 are presented at matched instance sizes. Egress pricing is shown for each cloud at the standard internet-egress rate, with regional and inter-region rates flagged where they differ materially. Spot, savings plans, and committed-use discounts are explicitly out of scope.
Company-size bands
The seed, Series A, Series B, growth, and enterprise stage bands on the by-company-size page use a typical engineering-team size at each stage (1-5, 5-20, 20-80, 80-300, 300+ engineers), apply category-specific tool list prices at that headcount, and roll up to a monthly total plus per-engineer cost. The example in each card shows the actual line items so the band is verifiable.
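The roll-up in each card follows one pattern: per-seat list price times headcount, summed across line items, then divided back out per engineer. A minimal sketch; the tool names and prices in the usage example are placeholders, not the line items on the live page:

```typescript
// One priced line item in a stage-band card.
interface LineItem {
  tool: string;
  perSeatMonthly: number; // vendor list price per seat per month
}

// Roll line items up to a monthly total and a per-engineer cost
// at a given stage's engineering headcount.
function stageBand(engineers: number, items: LineItem[]) {
  const monthlyTotal = items.reduce(
    (sum, item) => sum + item.perSeatMonthly * engineers,
    0,
  );
  return { monthlyTotal, perEngineer: monthlyTotal / engineers };
}
```

For a hypothetical 20-engineer Series A band with two placeholder line items at $25 and $15 per seat, the card would show $800/month total and $40/engineer.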
Refresh cadence
Vendor list prices and the per-employee and waste-rate benchmark citations are re-verified in the first business week of each month. The visible “Prices verified” label and the dateModified field in every page's Article JSON-LD read from a single constant (LAST_VERIFIED_DATE), so on-page text, schema, and footer always agree. Date drift between those surfaces is structurally impossible: the date lives in one place, and bumping it is a single-line change that updates every page at once.
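The single-constant pattern can be sketched as follows. The constant name matches the one mentioned above; the date value, helper names, and headline parameter are illustrative:

```typescript
// Single source of truth for every verification date on the site.
// Bumped once per monthly pass; the value here is a placeholder.
const LAST_VERIFIED_DATE = "2026-05-04"; // ISO 8601

// Visible on-page label.
function pricesVerifiedLabel(): string {
  return `Prices verified ${LAST_VERIFIED_DATE}`;
}

// Article JSON-LD for a page; dateModified reads the same constant
// as the visible label, so the two can never disagree.
function articleJsonLd(headline: string) {
  return {
    "@context": "https://schema.org",
    "@type": "Article",
    headline,
    dateModified: LAST_VERIFIED_DATE,
  };
}
```

Because both surfaces dereference the same constant, a cosmetic bump of one without the other is not expressible in the codebase.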
Out-of-cycle refreshes trigger on:
- Annual benchmark publication from a primary source (Zylo, Productiv, Flexera, Gartner) that shifts the anchor number.
- Vendor public pricing-page change that moves a list price by more than 10 percent on a referenced tool.
- Major cloud rate change at AWS, GCP, or Azure on a reference instance type or egress tier.
- New billing model from a referenced vendor (per-seat to usage-based migration, or vice versa) that changes how the cost is calculated.
- Substantive correction reported via the corrections inbox that updates a published figure.
Refreshes that move a per-employee benchmark or vendor list price by less than 5 percent batch into the next monthly pass. Refreshes that introduce a new billing model, shift an anchor benchmark by more than 10 percent, or correct a substantive error ship as soon as the change is confirmed against the originating source.
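The batching rule can be expressed as a small classifier. The under-5-percent and over-10-percent thresholds are the ones stated above; the handling of the unstated 5-10 percent band as a reviewed judgment call is an assumption, as are the names:

```typescript
type RefreshAction = "batch-monthly" | "ship-now" | "next-review";

// Classify a detected change: new billing models and substantive
// corrections ship as soon as confirmed, moves over 10 percent ship
// immediately, moves under 5 percent batch into the monthly pass.
// The 5-10 percent band is not specified above; it is treated here
// as a judgment call flagged for review (an assumption).
function classifyRefresh(change: {
  deltaPct: number;              // absolute percent move in the figure
  newBillingModel?: boolean;
  substantiveCorrection?: boolean;
}): RefreshAction {
  if (change.newBillingModel || change.substantiveCorrection) return "ship-now";
  if (change.deltaPct > 10) return "ship-now";
  if (change.deltaPct < 5) return "batch-monthly";
  return "next-review";
}
```

A 3 percent list-price move batches; a 12 percent anchor shift, or any confirmed billing-model change, ships at once.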
Limitations
Calculator and benchmark outputs are estimates. Production tech stack cost depends on enterprise agreements, EDP/MSA committed-use programs, regional surcharges, reserved-capacity commitments, and per-workspace tier negotiations that are out of scope here. Always verify with the vendor before purchasing or making a budget commitment.
The hidden-cost multiplier (1.4-1.6x) is a published industry range, not a point estimate. Organisations with mature platform engineering and tight tool governance can land closer to 1.2-1.3x; organisations with significant tech debt, fragmented tooling, or recent migrations can sit at 1.7-2.0x. The range shown on the hidden-costs page is the published average; team-specific math should substitute the team's own loaded engineering rate and known maintenance burden.
Per-employee SaaS benchmark sources (Zylo, Productiv, Flexera, BetterCloud) use slightly different methodologies (invoiced spend versus total estimated spend including shadow IT; new-hire-month versus tenured-employee normalisation; industry-mix weighting). Where the methodologies materially diverge, both anchors are shown on the page so the reader can choose the comparison most relevant to their organisation.
Corrections process
Spotted a stale price, a missing tier, a research source we have not cited, or a methodology divergence we have not flagged? Email [email protected] with the page URL and the source you would like cited. Substantive corrections (benchmark shifts, vendor pricing changes, new billing models) are typically actioned within five business days. Non-substantive corrections (typos, link rot, structural edits) batch into the next monthly pass.
See also the about page for the site's editorial position, disclosures, and full coverage map.