
CTO Advisory: towards a software quality benchmark you could show your CEO, because it creates value through sustainability

In large IT organisations focused on software development, change management and operations, leaders invest considerable effort in collecting information about the status and quality of what the organisation delivers.

These efforts recur over and over, in the form of daily, weekly and monthly reporting.

In this article, we discuss two approaches that could drastically reduce the cost of these reports and upgrade your teams to produce far more sustainable value.

First: find out what is really happening, using a simple, systematic ISO/IEC-based benchmark focused on the efficiency of your labour organisation!

Second: implement state-of-the-art automation and technology-focused reporting across your labour organisation to increase the efficiency of your value chain and of the reporting itself.

Create transparency with an ISO/IEC-based benchmark!

Existing reporting structures have a pitfall that is often hidden from, or even misunderstood by, top management. Due to the lack of communication between teams, the silo nature of pre-DevOps IT organisations and, of course, the rapidly growing complexity and heterogeneity of software landscapes, most likely nobody can tell you that software component X at time Y has quality benchmark score Z across the complete value chain.

Although there are daily, weekly and monthly reports on invested time, money and tests performed, e.g. “199 of 200 tests complete”, a very simple question remains unanswered:

What is the quality of your software product, at least by your company’s standards?

How do you know whether your software scales with the same efficiency as your competitors’? (Well, the simplest, though not the cheapest, empirical way to find out would be to observe who goes bankrupt first.)

A good starting point is [Dirlewanger2006]. Dirlewanger assesses ISO/IEC 14756 and asserts that before measuring quality, you need to have a management attitude. Simply put: what is the minimum quality level sufficient to make your software product acceptable?

[Dirlewanger2006] Dirlewanger, 2006.

The following diagram might cause heated debate in your office if you ask colleagues about their attitude towards which technical quality level of software is considered sufficient. It is one thing to say “we have measured a metric value of 42”, but quite another to have alignment on which values help your customers’ IT business prosper and which do not.

Following the official EU energy labelling directive 92/75/EEC as a commonly known way of visualising efficiency excellence: are you happy with an efficiency level of C, or would only an A bring your product beyond your competitors’ offer?

[Figure: attitude to sufficiency]

Defining such a level is a key aspect, shaped by company culture and the urgency of the project itself. Without it, you cannot be realistic about the required investment, and this is exactly why so many large IT projects turn into a never-ending nightmare, and why very expensive software systems mutate into an unmaintainable mess.

Yes, it’s just that simple: competitive technical quality costs money. The same applies to software products! Now, how do we prove that? And how do we make decisions?

Indeed, what is easily recognisable in the auto industry, e.g. Porsche or even Mercedes cars (try to get a low-cost but enjoyable Mercedes!), gets complicated with software – nobody can see it, and there are no regular obligatory safety checks comparable to those in car manufacturing.

So how do we address the inefficiencies in software development?

Let’s consider the following analogy: most people have a clear understanding of the price difference between energy efficiency classes of devices, say A+++ and C. This has nothing to do with the design or features of kitchen appliances, but with the TCO (total cost of ownership) of operation in terms of energy consumption. Buy a cheap refrigerator – pay more for energy.

Bringing it to Software Development

We propose to introduce a similar concept in the IT world – a generic software quality benchmark. If we shape software investment based on a conscious decision about the desired quality level, we have a clearer expectation of what needs to be invested in maintenance and change management.

Of course, there are several variables at play. Depending on the platform you use and the seniority of your team, you would find out what A+++ costs for a specific project. Nevertheless, if you can’t afford it, you might still be able to choose between, say, B or C; not addressing this aspect at all does not eliminate software quality from this world, but rather drives it somewhere beneath G without you knowing it. You could produce a software product with an amazing user experience and brilliant features which nobody can use, because nobody invested time in assessing the scenario of an unexpected peak in user sessions, and you’d still be in the red. On its own that performance problem is sad enough, especially if the peak is the result of an expensive strategic advertising campaign 😉
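To make the label analogy concrete, here is a minimal sketch of what such a grading function could look like. The normalised efficiency score and the thresholds are hypothetical placeholders; every organisation would have to define its own metrics and limits.

# Minimal sketch of a quality grading scheme, analogous to EU energy labels.
# The normalised "efficiency score" and the thresholds below are hypothetical
# placeholders; each organisation has to define its own metrics and limits.

GRADE_THRESHOLDS = [
    ("A", 0.90),
    ("B", 0.80),
    ("C", 0.65),
    ("D", 0.50),
    ("E", 0.35),
    ("F", 0.20),
]  # anything below the last threshold falls into "G"

def grade(efficiency_score: float) -> str:
    """Map a normalised efficiency score (0.0 .. 1.0) to a label A..G."""
    for label, threshold in GRADE_THRESHOLDS:
        if efficiency_score >= threshold:
            return label
    return "G"

if __name__ == "__main__":
    # e.g. a component that meets 72% of its defined quality criteria
    print(grade(0.72))  # -> "C"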

In case you have decided to outsource “IT services” or to buy a product, how do you know what quality you are paying for? It’s simple! Make transparent quality benchmark reports a mandatory part of the due diligence in your buying process. This brings a higher quality level to your desired product, as well as helping to shrink the candidate list for your software development.

Putting it into practice

So let’s say you have implemented the benchmark, and the assessment shows an average benchmark score of C. You would like to have A, but given this year’s budget, and also considering the continuous improvement principle, you decide to start working towards a desired quality benchmark level of B. Then you would like to know the price tag. Where to start – are there common criteria to audit?
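To illustrate where such an average score could come from, and where the gap to the target level sits, here is a small sketch that reuses the hypothetical grading scheme above; the component names and scores are made up.

# Sketch: aggregate hypothetical per-component scores into an average and list
# the components that keep the project below the target level. All names and
# numbers are illustrative.

from statistics import mean

TARGET_SCORE = 0.80  # hypothetical threshold for level "B" (see sketch above)

component_scores = {
    "web-frontend": 0.83,
    "order-service": 0.61,
    "billing-batch": 0.58,
    "search-api": 0.77,
}

average = mean(component_scores.values())
laggards = sorted(name for name, score in component_scores.items()
                  if score < TARGET_SCORE)

print(f"average score: {average:.2f}")   # ~0.70, i.e. level "C"
print("components below target B:", laggards)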

Labour automation once led to the industrial revolution – today we’ve got DevOps

We think the answer is definitely yes: DevOps is the way to go! The labour process across IT organisations looks very similar to the manufacturing world – agile methods just drive its clock rate, and in most cases the value chain will be very similar. Let’s step back to see what happens in a typical IT company with structured processes:

  • Business Designers and Enterprise Architects elaborate documentation from high-level architectural decisions down to specific Change Requests and, in agile teams, down to user stories;
  • Software Architects and Engineers use, in the best-case scenario, very specific descriptions of what the system should do and translate them into solution designs and finally into source code. Again in the best-case scenario, they will also write unit tests following formal requirements;
  • Testers prepare test models and scenarios to run them on what comes from development teams;
  • Operations try to understand what they need to know, and how, in order to deploy and operate the software; the first problems appear here, as there are rarely clear requirements regarding performance design; the same goes for IT security, which has, however, recently improved.

This model is a simplified description of the software engineering manufactory as we know it. Although there is some automation here and there, there is no consistent industrial automation concept across the whole value chain.

The good news is that the currently hyped DevOps methodology aims to close this gap. (Provocative homework: in your opinion, does hiring somebody labelled as a DevOps specialist solve the problem? Why?)

Now let’s put it all together

Following today’s best practices, the core of a software engineering factory implementing a DevOps strategy must be supported by and built on top of Continuous Delivery solutions (e.g. Jenkins, Bamboo or Microsoft Team Foundation Server) – software that will, in the near future, integrate more and more of the whole IT value chain. The tools existing today already allow any company to design a software quality benchmark, such as the following (oversimplified view):

[Figure: simple benchmark (oversimplified view)]
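As a minimal sketch of what such a benchmark step could look like inside a Continuous Delivery pipeline, the snippet below turns a few metrics (test pass rate, coverage, a response time check) into a single score and fails the build when it drops below the agreed level. The metric names, weights, threshold and the quality-metrics.json file are assumptions, not part of any particular tool.

# Sketch of a quality-gate step a Continuous Delivery pipeline could run after
# the build. The metrics, weights, threshold and input file are hypothetical;
# in practice they would come from your test runner, coverage and monitoring
# tools.

import json
import sys

WEIGHTS = {"test_pass_rate": 0.4, "coverage": 0.3, "p95_response_ok": 0.3}
MINIMUM_SCORE = 0.80  # e.g. the threshold agreed for benchmark level "B"

def benchmark_score(metrics: dict) -> float:
    """Weighted sum of normalised metrics (each expected in 0.0 .. 1.0)."""
    return sum(WEIGHTS[name] * metrics.get(name, 0.0) for name in WEIGHTS)

if __name__ == "__main__":
    # An earlier pipeline stage is assumed to have written this file.
    with open("quality-metrics.json") as fh:
        metrics = json.load(fh)

    score = benchmark_score(metrics)
    print(f"benchmark score: {score:.2f}")
    # A non-zero exit code fails the pipeline if quality is below the agreed level.
    sys.exit(0 if score >= MINIMUM_SCORE else 1)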

The nice thing is that, since this benchmark is product-engineering oriented rather than service oriented, reporting will work like a charm. Instead of collecting data through tedious controlling work, you can collect real-life data directly from your system environment, daily or even hourly, and display it instantly at your company headquarters – with as much granularity as needed for each stakeholder: e.g. the project as a whole for the higher-management perspective; per software component for software development leads; per web service endpoint for integration and operations.
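As a sketch of what such multi-granularity reporting could look like, the snippet below rolls hypothetical per-endpoint scores up to component and project level; all names and numbers are illustrative.

# Sketch: roll hypothetical per-endpoint scores up to component and project
# level, so each stakeholder gets the granularity they need.

from collections import defaultdict
from statistics import mean

# (component, endpoint) -> latest benchmark score
endpoint_scores = {
    ("order-service", "POST /orders"): 0.74,
    ("order-service", "GET /orders/{id}"): 0.81,
    ("search-api", "GET /search"): 0.69,
}

per_component = defaultdict(list)
for (component, _endpoint), score in endpoint_scores.items():
    per_component[component].append(score)

component_view = {c: mean(scores) for c, scores in per_component.items()}
project_view = mean(component_view.values())

print("per endpoint:", endpoint_scores)    # integration and operations
print("per component:", component_view)    # software development leads
print("project:", round(project_view, 2))  # higher management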

Conclusion

We think that today’s technological development brings development and operations together. We recommend that CTOs use a systematic benchmark based on an ISO standard to surface stakeholders’ attitudes towards the desired investment in the technical quality of software, and implement methods that disrupt the classic mid-management reporting institution.
