Innovating at startup speed in a software-driven world

The Built to Adapt Benchmark is a quantitative framework of indicators that gauges how well an organization builds and operates software. Pivotal partnered with Longitude Research and Ovum to assess more than 1,600 of the world’s top organizations in six countries and across five industries. We’ll be releasing it to the world in June 2018, but you can sign up for early access to review the benchmarks and see how your company measures up.

Overall Key Findings

Built for cloud or stuck with legacy systems.

Building for the cloud allows applications to scale easily. Yet across the organizations polled globally, only 37% of applications were built for, or have since been refactored to run in, the cloud.

Substandard software quality.

Organizations polled report that, on average, 20% of software launches and upgrades were delayed due to defects.

Security concerns limit progress.

In the last year, 33% of US organizations surveyed delayed application releases more than 11 times due to security concerns, compared with 21% globally.

Technology does not deliver business value.

Across organizations polled globally, developers spend only 35% of their time writing code for new products or features that deliver value, as opposed to maintaining or fixing old code.

Methodology

Designing The Built to Adapt Benchmark

Built on decades of experience working with world-leading disruptors and established businesses, The Built to Adapt Benchmark aims to provide a framework for assessing and discussing the performance of a given company, market, or sector. The framework draws on some of the most widely accepted standards of best practice.

One example is the developer-to-application ratio: the fewer developers required to keep each application running, the more resources an organization has free to write new code, features, and applications elsewhere.
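As a rough illustration of how this ratio works (the team sizes and application counts below are hypothetical examples, not survey figures), a quick sketch in Python:

def developer_to_app_ratio(developers_maintaining: int, applications: int) -> float:
    """Average number of developers tied up keeping each application running."""
    return developers_maintaining / applications

# A hypothetical 200-developer organization, 80 of whom maintain its 50 applications:
ratio = developer_to_app_ratio(developers_maintaining=80, applications=50)
print(f"Developers per application: {ratio:.1f}")  # 1.6

# The lower this ratio, the more developers are left over (here, 120 of the 200)
# to build new products and features instead of keeping existing systems running.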

The Methodology

The research, carried out by partners Longitude and Ovum, assessed organizations in six markets (the United States, the United Kingdom, Germany, Australia, Japan, and Singapore) and across five industries: Banking, Insurance, Automotive, Retail, and Telecommunications.

Assessments were based on interviews with 1,659 IT executives working at firms in the above sectors, over 90% of which reported annual revenue of more than US$100 million. The interviews were carried out between August 2017 and February 2018. Job titles were limited to CIO, CTO, CISO, Chief Digital Officer, Head of Software Development, IT Director, IT Manager, and IT Engineer.

Based on their interview responses, respondents were scored from 1 to 10 on each indicator, where 10 represents best practice.

To derive the final index ranking, a weighting was applied to each survey question measuring a facet of an indicator, to ensure equal representation and validity across facets.
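As a minimal sketch of that kind of weighted scoring (the question names, scores, and weights below are hypothetical, not the actual survey instrument or weighting scheme):

indicator_scores = {
    # question -> (score on the 1-10 scale, weight within the indicator)
    "cloud_readiness_q1": (7, 0.6),
    "cloud_readiness_q2": (4, 0.4),
}

# Weighted average of question scores gives the indicator score;
# normalizing by total weight keeps each facet's contribution as intended.
total_weight = sum(w for _, w in indicator_scores.values())
indicator_score = sum(s * w for s, w in indicator_scores.values()) / total_weight
print(f"Indicator score: {indicator_score:.1f} / 10")  # 5.8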

Samples & Margins of Error

The sampling frame was designed so that results are comparable by sector or by country. The margin of error for all sectors is 5.2% at the 95% confidence level, except for Automotive, where it is 6.1% because the scarcity of automotive industry businesses in Australia and Singapore reduced the sample. By country, the margin of error ranges from 5.2% in the United States to 6.9% in both Australia and Singapore, at the 95% confidence level.
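For reference, these figures follow the standard margin-of-error formula for a simple random sample at the 95% confidence level, assuming a worst-case proportion of 0.5. The sub-sample sizes below are illustrative values implied by the quoted percentages, not counts published in the report:

import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Roughly the sub-sample sizes implied by margins of 5.2%, 6.1%, and 6.9%:
for n in (355, 258, 202):
    print(f"n = {n}: +/-{margin_of_error(n):.1%}")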