Category Archives: Modelling

The formation of hypotheses about some aspect of the modeller’s experience of the world.

Evaluating platform architectures within ecosystems: modeling the supplier’s relation to indirect value

by Philip Boxer, PhD

I have completed a PhD by publication at Middlesex University’s School of Engineering and Information Science under the supervision of Professor Martin Loomes.  Here is its abstract:

This thesis establishes a framework for understanding the role of a supplier within the context of a business ecosystem. Suppliers typically define their business in terms of capturing value by meeting the demands of direct customers. However, the framework recognises the importance of understanding how a supplier captures indirect value by meeting the demands of indirect customers. These indirect customers increasingly use a supplier’s products and services over time in combination with those of other suppliers. This type of indirect demand is difficult for the supplier to anticipate because it is asymmetric to the supplier’s own definition of demand.

Customers pay the costs of aligning products and services to their particular needs by expending time and effort, for example, to link disparate social technologies or to coordinate healthcare services to address their particular condition. The accelerating tempo of variation in individual needs increases the costs of aligning products and services for customers. A supplier’s ability to reduce its indirect customers’ costs of alignment represents an opportunity to capture indirect value.

The hypothesis is that modelling the supplier’s relationship to indirect demands improves the supplier’s ability to identify opportunities for capturing indirect value. The framework supports the construction and analysis of such models. It enables the description of the distinct forms of competitive advantage that satisfy a given variety of indirect demands, and of the agility of business platforms supporting that variety of indirect demands.

Models constructed using this framework are ‘triply-articulated’ in that they articulate the relationships among three sub-models: (i) the technical behaviours generating products and services, (ii) the social entities managing their supply, and (iii) the organisation of value defined by indirect customers’ demands. The framework enables the derivation from such a model of a layered analysis of the risks to which the capture of indirect value exposes the supplier, and provides the basis for an economic valuation of the agility of the supporting platform architectures.

The interdisciplinary research underlying the thesis is based on the use of tools and methods developed by the author in support of his consulting practice within large and complex organisations. The hypothesis is tested by an implementation of the modelling approach applied to suppliers within their ecosystems in three cases: (a) UK Unmanned Airborne Systems, (b) NATO Airborne Warning and Control Systems, both within their respective theatres of operation, and (c) Orthotics Services within the UK’s National Health Service. These cases use this implementation of the modelling approach to analyse the value of platforms, their architectural design choices, and the risks suppliers face in their use.

The thesis has implications for the forms of leadership involved in managing such platform-based strategies, and for the economic impact such strategies can have on their larger ecosystem. It informs the design of suppliers’ platforms as system-of-system infrastructures supporting collaborations within larger ecosystems. And the ‘triple-articulation’ of the modelling approach makes new demands on the mathematics of systems modelling.

The following summarises the argument in terms of Value for Defence:

Value for Defence

On the naming of parts

by Bernie Cohen

The field of modelling is rich in terminological confusion and misunderstanding, partly because some of its terms have formal definitions that differ radically from their everyday usage. An eminent MIT Professor of Engineering used to introduce his students to the subtle concepts of precision, accuracy and significance with the following (non-PC) example.

You ask a lady her age and she tells you she is 35. This statement has a precision of plus or minus 6 months, could be inaccurate by as much as 10 years and, if she is attractive, has no significance whatsoever.

What follows is an attempt to cast some light on the terminological confusion and misunderstanding.

  • In mathematics, a theory is an abstract algebraic structure, with a signature defining the syntax of its sorts and operations and a set of axioms defining equivalence classes over its syntactically valid expressions. The assertion that certain statements are, or are not, in the same equivalence class is a theorem, which may be proved within the formal system in which the theory is expressed. A theory is said to be consistent if no two of its theorems contradict each other and complete if every valid expression in it is provably true or false. (Gödel’s Incompleteness Theorem raises its ugly head here: no theory powerful enough to define arithmetic can be both consistent and complete.)

This definition may seem completely meaningless to the engineer, or even to the scientist, but theories are, indeed, devoid of any ‘meaning’ in the sense that they do not refer to anything in the world of experience.
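To make the bare definition a little more concrete, here is a minimal sketch of such a structure written in Lean, using the standard textbook example of a monoid (the example and the names are mine, chosen purely for illustration): a signature of one sort and two operations, together with equational axioms, and nothing that refers to the world.

    -- A 'theory' in the algebraic sense: one sort (the carrier), two
    -- operation symbols, and axioms stated as equations over them.
    -- Nothing here refers to anything in the world of experience.
    structure MonoidTheory (carrier : Type) where
      unit       : carrier
      op         : carrier → carrier → carrier
      assoc      : ∀ a b c, op (op a b) c = op a (op b c)
      unit_left  : ∀ a, op unit a = a
      unit_right : ∀ a, op a unit = a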

  • A model, in this context, is a theory morphism that assigns a set to each sort, and a function to each operation, of a theory, in such a way that all the axioms of the theory hold in the model. That such a morphism does or does not constitute a model of the theory is a theorem.

Still not very meaningful until we interpret the sets in our models as referring to the values taken by certain kinds of things in the world and the functions as referring to the behaviours of those things. Now our model becomes a theory (in the scientific sense) of any part of the world in which things of those kinds occur.
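Before that interpretive step, the purely mathematical notion of a model in the previous bullet can be illustrated by continuing the Lean sketch above (again purely illustrative): a model assigns an actual set to the sort and actual functions to the operations, and the claim that every axiom holds under that assignment is itself discharged as a theorem, by proof.

    -- A 'model' of the theory: the sort is interpreted as the natural
    -- numbers, the operations as 0 and +, and each axiom must be proved
    -- to hold under that interpretation (the theorem referred to above).
    def natAdditionModel : MonoidTheory Nat where
      unit       := 0
      op         := (· + ·)
      assoc      := Nat.add_assoc
      unit_left  := Nat.zero_add
      unit_right := Nat.add_zero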

Actually, such a model of the world initially has the status of a hypothesis, which must be rigorously tested before graduating to the status of theory. Hypothesis testing is the foundation of scientific method. Unlike mathematics, whose methods deal largely with verification using formal proofs, scientific method works largely with refutation, the demonstration that a hypothesis is false. Sir Karl Popper went so far as to insist that any hypothesis that was not, in principle, refutable could not be deemed to be scientific at all, thereby excluding astrology from the sciences.

Hypothesis refutation involves a combination of mathematical proof and empirical observation. The method is to prove a theorem in the underlying theory whose interpretation predicts certain as yet unobserved behaviour of things in the world. An experiment is designed, using suitable effectors, sensors and instruments, to induce the predicted behaviour. This procedure is effectively an investigation of commutativity. Given any function in the model and any value in its domain, we may execute that function on that value and then interpret the result in the world. Alternatively, we could first interpret the function and its input value and observe how that part of the world behaves. If the theory and its interpretation commute, then these two procedures should always produce the same result.
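Written out in symbols (the notation is introduced here only for illustration): let F be a function of the model, x a value in its domain, ι the interpretation that maps model elements to things and behaviours in the world, and B the behaviour actually observed of those things. The two procedures produce the same result just when

    % Commutativity of prediction and observation:
    % compute-then-interpret must agree with interpret-then-observe.
    \[
      \iota\bigl(F(x)\bigr) \;=\; B\bigl(\iota(x)\bigr)
    \]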

If the predicted behaviour is not observed, non-commutativity has been demonstrated and the hypothesis is deemed to be false. If, however, the behaviour is observed, we have not proved commutativity, but merely that the interpretation of that particular function with that particular value gives an accurate account of the observed behaviour. The hypothesis is not deemed to be true, but merely to have survived that test. A hypothesis that has survived many such tests, repeated by different experimentalists in different conditions, especially where the predicted results are, in some cultural sense, surprising, is eventually granted the status of a law, although that terminology has fallen into disuse.

Actually, hypotheses are not cast and tested individually. Rather, collections of them are interrelated and stand or fall together as a body of theory that defines and governs some branch of science. Their interrelationship stems from their shared ontology, the way that they identify and distinguish things in the world. Contrary to popular belief, the world is not ontologically prior, that is, empirical reality does not consist of distinct objects. Rather, each of us impresses upon our experience of empirical reality our own ontology, constructed to suit our purposes and circumstances. For example, the Inuit distinguish many more kinds of snow than do other North Americans because they both experience much more of it and have to make their living through it (literally). As W. V. O. Quine put it, ‘to be is to be the value of a variable’.

It was once believed, by Plato, Porphyry, Leibniz and other eminent philosophers, that there was a universal ontology, that is, that all the things in the world could be uniquely classified under a hierarchical scheme of differentiation. Sadly, those days have gone. Even if such an ontology were proposed, its universality could not be verified. Science has already encountered many instances of ontological change. For example, for many years, chemistry recognised a type of thing called phlogiston, which was given off when a combustible substance was burned, and another called calx, which was the residue left after the departure of the phlogiston. Now we have no place for phlogiston in our ontology, but we can retain calx as referring to the oxidised residue of burning.

So in order to communicate among scientific disciplines, we must find ways of composing our different ontologies, finding mappings among their terms that do not violate any of the theories on which the laws of the various disciplines rest.

One way of doing this is to construct a GUT (Grand Unified Theory), or TOE (Theory of Everything), which is indeed a major project in physics today. The problem has been to reconcile general relativity and quantum mechanics, hypotheses which are as close to laws as anything in science but which, unfortunately, contradict each other. The approach, as it has been on many other occasions, is to introduce new ontological distinctions ‘underneath’ those of the competing hypotheses, which can then be re-expressed, and subsequently successfully composed, in the new ontological structure. In this case, the elements of the new ontology are strange things indeed. Just as we had got used to fundamental ‘particles’ that were also ‘waves’ and could be observed only for fleeting moments at ridiculously high energies, we were presented with their unobservable components, the quarks, and with the quarks’ inconceivable components, strings vibrating in ten or eleven spacetime dimensions.

But although a successful TOE would reduce physics to a single theory, we would not reduce our accounts of our empirical experience to its ontology. The reductionist programme was once a cornerstone of scientific philosophy, but those days are long gone, and engineering was largely responsible for their passing.

The ‘I’ of the beholder

by Philip Boxer

Charlie, thank you for your last response about distinguishing the impact of differences in context. In Richard’s reply about distinguishing the third asymmetry, he emphasised the dilemma that each asymmetry presents for the supply-side, each posing its own challenge for how the relationship with demand is managed.
But I wanted to pick up on your observation about motivation drivers:

    “My hypothesis is that differences in makeup from person to person have less of an impact on their wants and behavior than the differences in the context that they find themselves in.”

This is okay as long as you can separate the motivation drivers from the way the person ‘reads’ their context. The difficulty is that this is often not possible without imposing some a priori view of the person’s construction of their situation – practical when looking for common perceptions of the value of cars or health club services across a number of consumers, but not so practical when considering the particular way in which family situation affects wealth management needs, or the way in which a client business’s strategy affects how it considers using ICT. It comes down to what you can safely ignore as a supplier.
What distinguishes the first two asymmetries is that they are predicated on being able to ignore the specificities of the customer’s context-of-use, precisely because what is being looked for is a symmetry between the supply-side offering and the demand being attended to on the demand-side – the supply-side economics require the presumption that demand is symmetrical. With the third asymmetry this is no longer the case, because the economics have changed as a result of the ability of the technology to sustain power-at-the-edge cost-effectively.
Working with the three asymmetries means that we always have to take into account the impact of the ‘I’ of the observer on the way the relationship to the customer’s situation is defined. With the first two asymmetries, what is particular about the customer’s relation to his or her context-of-use can be aggregated out, so that the ‘I’ can be asserted unilaterally. But with the third asymmetry it cannot, so that the ‘I’ has to be established collaboratively. This is why it is so difficult to take power to the edge of the organisation – it questions the authorisation of the ‘I’ of the beholder.