Getting to Why

Alex Rutherford
Jun 20, 2017 · 5 min read

I recall a sudden sense of enlightenment, tinged with future shock, when I first asked my bank teller to explain why I had not been approved for a credit card (back when I first arrived in the U.S. without a credit score). The answer was at once unsatisfactory and yet vaguely familiar from data mining classes.

Well, it seems to help if you have a couple of different credit cards… that helps… Also the time you have had your account maybe…

This was a firsthand example of a person making sense of a ‘black box’ algorithm (one in which a decision is made by a machine with little insight into how or why it was made).

This gets to the heart of a hot topic: the auditability of algorithms. In other words, the ability to check on the decision-making process of algorithms, principally to avoid discrimination, whether purposeful or accidental. This matters more and more as algorithms increase in complexity, from simple rule-based models such as Decision Trees to abstract models in higher dimensions such as Support Vector Machines and Convolutional Neural Networks. The trade-off invariably reduces to greater accuracy in exchange for greater difficulty in explaining why.

The diagram below breaks down a Decision Tree for the classic iris dataset. The attributes include petal length and sepal length, and at each branch a rule of the form ‘If attribute X is greater than value Y then…’ is applied.

Visualisation of a decision tree: from scikit-learn documentation
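For the technically minded, a tree like this can be reproduced and its rules printed directly with scikit-learn. A minimal sketch (not the exact tree in the figure, since depth and random seed will differ):

```python
# Fit a small decision tree to the iris dataset and print its
# branching rules, each of the form "if feature <= threshold ...".
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(iris.data, iris.target)

print(export_text(clf, feature_names=list(iris.feature_names)))
```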

While the Decision Tree example does well to surface the decision-making process in an interpretable way, classification algorithms quickly become murky. Ensemble methods combine potentially hundreds of weak learners that compensate for one another’s weaknesses. Kernel techniques apply non-linear transformations, projecting data into higher-dimensional spaces. Neural networks massage data through hundreds of interconnected layers. In the algorithmic arms race, interpretability invariably falls by the wayside.

If ‘an algorithm’ in raw form is simply a set of numbers (or weights) to be applied to a set of inputs, then presenting algorithmic regulators with a set of binary learned models would be neither helpful nor feasible. So how can regulators meaningfully look inside and inspect an algorithm?

Human Testing

Many metrics exist to assess the performance of algorithms, but we should always return to the human experience of these complex beasts: the extent to which their behaviour on a single example is in line with what we expect. In other words, rather than trying, in some sense, to “take a copy of the algorithm”, it should be straightforward to submit an example to the algorithm and see the result it produces. By iteratively and manually submitting examples, we can build up an intuition for how the algorithm makes its decisions.
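As a toy illustration of that kind of probing, here is a minimal sketch in which a scikit-learn random forest stands in for the black box: sweep one input feature and watch how the decision changes.

```python
# Probe a black-box classifier by varying a single feature of one example
# and observing how the predicted class changes.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

base = iris.data[0].copy()            # one example to perturb
for petal_length in np.linspace(1.0, 7.0, 7):
    example = base.copy()
    example[2] = petal_length         # feature 2 is petal length (cm)
    print(petal_length, model.predict([example])[0])
```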

Admirably, many organisations leading the way in machine learning advances release open-source software libraries such as TensorFlow, Keras and Caffe. These also include pre-trained models for canonical tasks such as object recognition in images, which is invaluable given the huge amount of training data, domain knowledge and computing resources needed to converge on a stable model.

In these cases, an invested and CS-literate user could download the models and test their performance against any input image. The barrier to entry is the ability to use a Python API and a powerful laptop. The Caffe documentation, for example, walks through how to use the pre-trained CaffeNet image recognition model to recognise a cat in about 20 lines of code.

Caffe documentation
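The same kind of spot check is arguably even simpler today with a Keras pre-trained model. A rough sketch follows; this is not the Caffe walkthrough itself, and “cat.jpg” is a stand-in for whatever image you want to test.

```python
# Test a pre-trained image classifier (Keras/ResNet50, standing in for the
# CaffeNet example above) against an arbitrary input image.
import numpy as np
from tensorflow.keras.applications.resnet50 import (
    ResNet50, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = ResNet50(weights="imagenet")        # downloads pre-trained weights

img = image.load_img("cat.jpg", target_size=(224, 224))  # your own test image
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

# Top-3 labels the model assigns to this image
print(decode_predictions(model.predict(x), top=3)[0])
```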

The barrier to entry could be lowered further by defining a set of common exemplars. These benchmarks would be submitted to the algorithms and the results returned and inspected. Facial recognition algorithms, for example, could be tested against a set of faces spanning widely differing ethnicities, including challenging imbalances. In this way a non-technical person in a regulatory or supervisory role could qualitatively define metrics that reward behaviours considered fair.
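A sketch of what such a benchmark harness might look like, where the `recognise` function and the benchmark format are purely hypothetical stand-ins for a vendor’s classifier and an agreed exemplar set:

```python
# Hypothetical benchmark harness: submit a fixed set of exemplars and
# compare accuracy across demographic groups. `recognise` and the
# benchmark structure are illustrative assumptions, not a real API.
from collections import defaultdict

def group_accuracy(recognise, benchmark):
    """benchmark: list of (image, true_label, group) tuples."""
    correct, total = defaultdict(int), defaultdict(int)
    for img, true_label, group in benchmark:
        total[group] += 1
        if recognise(img) == true_label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# A regulator might then require that per-group accuracies do not diverge
# beyond some agreed threshold: max(acc.values()) - min(acc.values()).
```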

Proprietary Algorithms for Proprietary Platforms

Unfortunately, object recognition in images is the simplest case study from the perspective of data ownership: petabytes of images are freely available for varied uses and benchmarking. How can we arrive at meaningful benchmarking for a more contentious problem such as predicting loan default, recommending content or targeting ads? In these cases, the input data are very closely tied to a particular platform or service and are rightly tightly controlled due to their sensitivity. Without access to the corresponding data and the inner workings of the platform, how can we meaningfully audit these algorithms?

The General Data Protection Regulation calls for algorithms to explain their decisions. However, knowing that signal X is most important, without having legitimate access to what signal X actually is, renders that transparency useless. That signal might, for example, be related to browsing activity over the last six months.

Synthetic data offers a way forward here. A fake user that we can define and tweak, with a synthetic profile and history, would provide a practical means to interrogate proprietary algorithms designed for proprietary data while protecting both. Controlled access for approved users and rate limiting would protect precious algorithms from being reverse-engineered through systematic queries by adversaries.
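As a sketch of the idea, an auditor might hold a synthetic profile fixed, vary one attribute at a time and compare the platform’s responses. The endpoint URL and profile fields below are entirely illustrative assumptions, not a real service.

```python
# Hypothetical audit via synthetic users: hold everything fixed, vary one
# attribute, and compare the platform's decision. The endpoint and profile
# fields are placeholders for illustration only.
import copy
import requests

AUDIT_ENDPOINT = "https://example.com/audit/score"   # placeholder URL

base_profile = {
    "account_age_months": 18,
    "num_credit_cards": 2,
    "browsing_category": "news",
}

def score(profile):
    # Approved-user authentication and rate limits would be enforced server-side
    return requests.post(AUDIT_ENDPOINT, json=profile, timeout=10).json()

for category in ["news", "gambling", "health"]:
    probe = copy.deepcopy(base_profile)
    probe["browsing_category"] = category
    print(category, score(probe))
```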

Human-Centred Metrics — A Final Thought

Statistical models are commonly assessed by metrics that balance their accuracy with their complexity; more specifically, the goodness of their fit against the number of free parameters. In other words: the more explanatory power with the fewer dials that need to be turned, the better. Can we imagine a more rounded, human-centred metric of performance, one that rewards limited bewilderment when probing and testing the model?
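The classical versions of such accuracy-versus-complexity metrics are the Akaike and Bayesian information criteria. A small reference sketch of what “goodness of fit penalised by free parameters” looks like in practice:

```python
# Standard accuracy-vs-complexity criteria: reward fit (log-likelihood),
# penalise the number of free parameters k (and sample size n for BIC).
import numpy as np

def aic(log_likelihood, k):
    """Akaike Information Criterion: 2k - 2 ln(L)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: k ln(n) - 2 ln(L)."""
    return k * np.log(n) - 2 * log_likelihood
```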

Footnote

There are several pop culture references here that the diligent reader is rewarded for recognising.
