Tutorial


Pete Rotella

Title: Improving Software Reliability in a Changing Industry

Abstract: Achieving high software quality is essential. Interoperability has never been so important; release velocity needs to meet time-to-market goals; high function and system availability are expected; and customers' quality expectations have never been higher. To adapt, compete, and hopefully excel in this increasingly fast-paced industry, we must not only provide functionality to the market quickly but also prove to customers that they are receiving a high-value product. The best way to do this, in nearly all cases, is to build and release products of exceptionally high quality, that is, with high availability, reliability, security, usability, and other key quality attributes.

Quality management has become increasingly important because the software development lifecycle is often quite different from what we were used to even a decade ago. Scaled agile, continuous integration, continuous deployment, automated testing, DevOps: in this changed (and changing) world, we cannot depend on the proven waterfall/hybrid methods that were routinely used to gauge reliability, for example, but need different (yet still effective) approaches to understand where we stand. So, how do we configure our quality management program to help us substantially improve reliability? The same applies to availability, security, and other attributes. We'll also touch on another interesting topic in the symposium, arguably an essential quality consideration: the types of service requests companies deal with, often in high volumes, and ways to identify systemic issues that appear in the service request population.

So, how do we construct and implement a transformative program that can help us elevate the quality of new releases to best-in-class levels (i.e., the top 10% of the pertinent industry sector)? Here, most of our discussion will center on reliability/availability, usually the most important driver of customer software satisfaction.

We'll begin by describing the use of three types of software metrics: in-process, customer experience, and customer sentiment (satisfaction) metrics. Each metric type characterizes an important part of the software lifecycle. We will also describe our experience in developing new and useful metrics, and show how we typically “link”/correlate in-process measures and metrics (from development and test) to customer experience and customer satisfaction metrics, which in turn invariably correlate strongly to revenue, market share, and profit. We'll discuss the steps involved in choosing the most valuable metrics and setting goals for them, and go into practical ways of using them to improve development and test practices and processes.
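As a concrete illustration of the linking step, here is a minimal sketch that rank-correlates per-release in-process measures with a customer experience metric. The metric names and figures are hypothetical placeholders, not the actual metrics covered in the tutorial:

```python
# Illustrative sketch: correlating in-process metrics with a customer
# experience metric across releases. All names and numbers are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

# One row per release: in-process measures plus a field-quality outcome.
releases = pd.DataFrame({
    "test_escape_rate":  [0.12, 0.09, 0.15, 0.07, 0.11],  # defects missed in test
    "code_churn_kloc":   [42.0, 35.5, 58.3, 29.1, 44.7],  # churned KLOC per release
    "field_defect_rate": [1.8,  1.2,  2.6,  0.9,  1.7],   # customer experience metric
})

# Spearman rank correlation tolerates non-linear but monotonic relationships,
# which is common with the small per-release samples typical of this analysis.
for metric in ("test_escape_rate", "code_churn_kloc"):
    rho, p = spearmanr(releases[metric], releases["field_defect_rate"])
    print(f"{metric}: rho={rho:.2f}, p={p:.3f}")
```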

For example, an important use of in-process metrics is to construct mathematical models that predict software reliability and other key quality attributes. These models enable software practitioners to identify deficient (and superior) development and test practices.
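A minimal sketch of such a predictive model, assuming an ordinary least-squares regression over per-release in-process metrics; the features and data below are illustrative assumptions, not the models developed in the tutorial:

```python
# Illustrative sketch: predicting field defect density from in-process
# metrics with ordinary least squares. Features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# Per-release in-process metrics: [defect find rate in test, code churn
# in KLOC, fraction of tests automated].
X = np.array([
    [3.1, 42.0, 0.55],
    [2.4, 35.5, 0.62],
    [4.0, 58.3, 0.48],
    [1.9, 29.1, 0.71],
    [2.8, 44.7, 0.60],
])
y = np.array([1.8, 1.2, 2.6, 0.9, 1.7])  # field defect density per release

model = LinearRegression().fit(X, y)
print("coefficients:", model.coef_)                       # which practices move the needle
print("predicted:", model.predict([[2.5, 40.0, 0.65]]))   # estimate for a new release
```

In practice the coefficients (and residuals) matter as much as the predictions: a large positive weight on a controllable practice metric points to where development or test process changes would pay off.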

In addition to the data science/machine learning aspects of quality management, organizations also need goaling strategies that enable release-over-release improvement. Reaching best-in-class reliability, for instance, becomes progressively more difficult as we approach the top level, so the strategy must be grounded in an approach with a strong conceptual and quantitative basis. Therefore, we need historical baselining and industry/internal benchmarking to establish a basis for goaling the metrics associated with the various practices, processes, and tools.
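As a toy illustration of percentile-based goaling against a benchmark, assuming we have reliability figures for a set of peer releases (all numbers below are hypothetical):

```python
# Illustrative sketch: deriving a best-in-class goal (top 10% of the sector)
# from a benchmark distribution and staging it release over release.
import numpy as np

# Field defect rates for peer releases (lower is better); hypothetical data.
peer_rates = np.array([2.4, 1.9, 3.1, 1.2, 2.7, 0.8, 1.6, 2.2, 1.1, 2.9])
our_baseline = 2.0  # historical baseline from our own recent releases

# Best-in-class corresponds to the 10th percentile when lower is better.
best_in_class = np.percentile(peer_rates, 10)

# Stage the goal over several releases rather than in one jump.
n_releases = 4
step = (our_baseline - best_in_class) / n_releases
goals = [our_baseline - step * (i + 1) for i in range(n_releases)]
print(f"best-in-class threshold: {best_in_class:.2f}")
print("release-over-release goals:", [round(g, 2) for g in goals])
```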