ad astra per alia porci

Super Crunchers: Why Thinking-By-Numbers is the New Way to be Smart
December 21, 2007, 5:25 am
Filed under: investments/finance/economics, Law

A book written by a law and economics guru, Professor Ian Ayres of the Yale Law School. The central thesis of the book is that common people, academics and businesses are increasingly using statistical predictive models to make decisions, instead of relying on pure intuition. This trend is driven partly by the inadequacies and inaccuracies of intuition and of relying on “experts”, and partly by the development of powerful computers that can crunch numbers quickly. Concomitantly, statistical methods seem to be much better predictors than the naked human mind.

American law schools are the biggest proponents of interdisciplinary approaches to the study and advancement of the law, and law and economics is probably the hottest of the palette of mixtures available. This approach is just starting to take root in Singapore with the offering of double-degree programmes in law and business, law and life sciences and of course law and economics, of which yours truly is in one of the pioneer batches, amongst other combinations. Reading this book gave me the impression that American law professors are much broader in their thinking: they have a wider intellectual range that lets them blend issues from other disciplines into the law and write outside the narrow legal academic circumference.

I am of course a biased observer, but I do believe that an interdisciplinary approach is the right way to go. Currently my impression is that the law is unhealthily focused on language, interpretation and argument; the rigour of the empirical techniques that once belonged to the domain of science and the “harder” social sciences would serve as a complement. It would be great if courts accepted economic arguments and statistical analysis when deciding cases that primarily turn on “intuitive” issues like public policy and morality. The law exists to govern human actions and promote the social good; adopting methods that aid decision-making helps the law achieve its aims.

The application of mathematical methods will serve to remove much of the vagueness and uncertainty of intuition and allow the courts to come to more definite answers. For example, in determining compensation when a victim of a car accident sues in tort, cash flow analysis that takes into account inflation and growth in earnings capacity can lead to a fairer prediction of the amount that the victim is allowed to recover compared to a purely intuitive, arbitrary number determined by the judge.
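As a sketch of the kind of cash flow analysis meant here, consider valuing a victim's lost future earnings by growing the salary each year and discounting it back to the present. All the figures below (salary, growth rate, discount rate, horizon) are purely illustrative assumptions, not numbers from the book:

```python
# Hypothetical sketch: present value of a tort victim's lost future earnings.
# Salary, rates and horizon are illustrative assumptions only.

def lost_earnings_award(annual_salary: float, years: int,
                        growth_rate: float, discount_rate: float) -> float:
    """Sum the discounted value of each future year's (growing) salary."""
    total = 0.0
    for t in range(1, years + 1):
        future_salary = annual_salary * (1 + growth_rate) ** t
        total += future_salary / (1 + discount_rate) ** t
    return total

award = lost_earnings_award(annual_salary=50_000, years=20,
                            growth_rate=0.03, discount_rate=0.05)
print(round(award, 2))
```

The point is that the award falls out of explicit, contestable parameters rather than a judge's gut figure; the parties can argue over the growth and discount rates instead of the final number.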

As suggested in the book, regression analysis can be used to test the effect of laws that are implemented. For example, the current big issue here pertains to Section 377A of the Penal Code and whether it is archaic and should be abolished. Any Tom, Dick and Harry can say whatever they want about the arguments for and against the abolition of 377A, but to me, the argument ends there. The argument that abolishing 377A will lead to more homosexuals in society is, on a logical level, as appealing to me as the argument that abolishing 377A will lead to more equal rights for all members of society. It’s really anyone’s game, and anyone can make a perfectly acceptable argument for or against abolition without any real proof that one side is better than the other.

Applying empirical statistical methods to the situation would aid decision-making, because it provides a means of testing the validity of a statement. Of course, it sounds Machiavellian and crazy, but one way to test whether abolishing 377A will indeed lead to more gays in society is to abolish it, collect data subsequently, and apply regression analysis to see how this one factor affects the numbers. Alternatively, bearing in mind that social situations differ between countries and cultures, one can use data from other countries which have done the same and analyse that instead.
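To make the regression idea concrete, here is a minimal sketch using ordinary least squares on entirely synthetic data. The "policy" dummy stands in for whether a law is in force, and "outcome" for whatever social measure one tracks; none of this is data from the book or from any real jurisdiction:

```python
import numpy as np

# Hypothetical sketch: estimating the effect of a law change with OLS.
# Data are synthetic; the "true" policy effect is set to 0.5 so we can
# check that the regression recovers it.
rng = np.random.default_rng(0)
n = 200
policy = rng.integers(0, 2, n)        # 0/1 dummy: is the law in force?
controls = rng.normal(size=n)         # stand-in for income, urbanisation, etc.
outcome = 2.0 + 0.5 * policy + 1.5 * controls + rng.normal(size=n)

# Design matrix: intercept, policy dummy, control variable.
X = np.column_stack([np.ones(n), policy, controls])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(beta)  # beta[1] is the estimated policy effect
```

The estimated coefficient on the dummy is the "effect of the law" holding the controls fixed; the hard part in practice, as the book's discussion of backward crunching suggests, is finding controls good enough to make that claim credible.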

The point here is that crunching numbers allows decision makers to weigh options and test statements. It does not totally remove the role of intuition and choice and reduce the decision-making process to a purely mechanical one, but it does lend a degree of objectivity that can weed out and expose opinions made solely on emotive grounds. Rather than merely speculate in one’s mind the impact of a decision of a court of law in ruling in a particular way, more weight can be given to options that are proven statistically to work better.

Moving on to less dreary stuff, here are some interesting ideas and trends that I have picked out from the book:

1) The rise of data crunchers – Data gets more expensive

Companies that aggregate raw data and help businesses crunch it will become more numerous and profitable as the way businesses make decisions changes. Data will become more valuable as consumers and businesses alike start to recognise the usefulness of crunching numbers and pay good money for it.

2) Economic efficiency

I am a bit iffy about whether crunching numbers can lead to overall social benefit and economic efficiency. The jury remains out on this issue. Consumers can be exploited by companies that use these methods to maximise profits. One example in the book is of an airline that calculated the “pain level” of service at which a customer will switch to another airline; once that optimal level is found, the company can pitch its service at exactly that standard and keep the maximum number of passengers without ever having to improve, even though continual improvement is what consumers would actually want.

Conversely, the argument that consumers themselves can use these techniques to protect themselves against businesses is not as strong, because consumers are less equipped, intellectually and resource-wise, to crunch numbers and predict prices for their own benefit. Even though the book gives examples of organisations and websites that have gathered information and crunched data on, say, the best time to book flights for the lowest fares, the problem remains that these entities are organisations, not individuals. One cannot expect an individual consumer to spend the time and effort crunching their own numbers to avoid being duped at the cash counter.

3) The distinction between forward and backward crunching

Backward crunching is when historical data that was not purposely gathered or created by the user is statistically analysed. This analysis is much more difficult, given the lack of control groups and the myriad of factors that can affect the variables.

Forward crunching is when the user actively runs “experiments” with control groups and surveys, gathering data for analysis. This is what Google does when it serves randomised ads to test which arrangement of words leads to the most clicks. Information is gathered in a structured way, and the impact of changed variables is analysed through the data.
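The ad experiment above can be sketched as a simple comparison of click-through rates between two randomly assigned wordings, checked with a two-proportion z-test. The counts below are made up for illustration and are not from the book:

```python
from math import sqrt

# Hypothetical sketch of "forward crunching": comparing the click-through
# rates of two ad wordings shown to randomly assigned users.
clicks_a, shown_a = 120, 4000   # ad wording A (illustrative counts)
clicks_b, shown_b = 165, 4000   # ad wording B (illustrative counts)

p_a = clicks_a / shown_a
p_b = clicks_b / shown_b

# Pooled two-proportion z-test: is the difference in rates just noise?
p_pool = (clicks_a + clicks_b) / (shown_a + shown_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / shown_a + 1 / shown_b))
z = (p_b - p_a) / se
print(f"CTR A = {p_a:.2%}, CTR B = {p_b:.2%}, z = {z:.2f}")
```

Because the wordings were assigned at random, a z-score beyond roughly ±1.96 lets the experimenter attribute the difference to the wording itself rather than to confounding factors, which is exactly the advantage forward crunching has over backward crunching.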

