Paul Ormerod: June 2012

Thursday, 28 June 2012

Royal Bank of Scotland fiasco shows the power of networks


The last week or so has seen complete mayhem at the Royal Bank of Scotland and its subsidiaries.  A computer glitch has caused their payments systems to collapse.  Monies have not been processed, and 17 million customers have been unable to access their accounts or pay their bills.

The impact for RBS has been catastrophic.  So, an incident of this magnitude must surely have been caused by a massive event?  Perhaps the building containing the Bank’s main computers was burned to the ground?  Or the system was the victim of a malevolent cyber attack by a hostile power?  In fact, nothing like this took place at all.  It seems that an inexperienced operative in India accidentally wiped information during a routine software upgrade.

In other words, a relatively trivial problem cascaded across the entire network and ‘went global’.

This is not an issue which is specific to RBS.  It is a fundamental feature of any system in which networks are important.  The classic example is outages in electricity supply systems, leading to huge blackouts.  Sometimes, there is indeed a major event which causes a major failure, such as a hurricane or ice storm destroying physical links in the system.  But all too often, it is a trivial failure which leads to a cascade across the system.

Most of the time, of course, the impact of small events is confined to their immediate locality and spreads no further.  But it is the connected nature of networked systems which means that, in principle, even small events can have consequences on a scale up to and including the network as a whole.  The probability of any single small event causing a dramatic incident is very, very small.  But trivial problems occur on an almost daily basis in almost all systems.  So at any time, there is the potential for catastrophic failure.
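This dynamic can be sketched with a toy simulation — a purely hypothetical illustration of my own, not a model of RBS or any real system.  Build a sparse random network, let one node fail, and let each failure spread to a neighbour with some fixed probability.  Run it repeatedly: most cascades die out almost immediately, while a minority engulf a large fraction of the network.

```python
import random

def simulate_cascade(n=400, avg_degree=4, p_spread=0.3, seed=0):
    """Toy percolation-style cascade: one node fails at random, and each
    failure spreads independently to each neighbour with probability p_spread."""
    rng = random.Random(seed)
    # Build a sparse Erdos-Renyi-style random network.
    neighbours = {i: set() for i in range(n)}
    p_edge = avg_degree / (n - 1)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p_edge:
                neighbours[i].add(j)
                neighbours[j].add(i)
    # Start from a single small failure and let it propagate.
    start = rng.randrange(n)
    failed = {start}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for nb in neighbours[node]:
            if nb not in failed and rng.random() < p_spread:
                failed.add(nb)
                frontier.append(nb)
    return len(failed)

# Repeat the experiment: the same tiny shock usually fizzles out,
# but occasionally it 'goes global'.
sizes = [simulate_cascade(seed=s) for s in range(50)]
small = sum(1 for s in sizes if s < 10)    # cascades that stayed local
large = sum(1 for s in sizes if s > 50)    # cascades that went system-wide
print(small, "tiny cascades;", large, "system-wide cascades")
```

The point of the exercise is the shape of the distribution, not the particular numbers: identical trivial shocks produce mostly tiny failures and a handful of catastrophic ones, which is ‘robust yet fragile’ behaviour in miniature.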

In the scientific literature on the fundamental mathematical properties of networks, there is a jargon to describe this inherent property of networked systems.  They are ‘robust yet fragile’, a phrase initially coined by the top Caltech scientist John Doyle, way back (!) in the 1990s.  They are ‘robust’ in that small shocks, small problems, do not usually spread very far in the system.  But at the same time they are ‘fragile’.  A tiny adverse event can in principle bring the whole system down.

We see this principle very clearly in financial markets.  Think back to the banking credit crisis of the late summer of 2007, the harbinger of the major crash just over a year later.  At the end of June in 2007, there were few problems.  Voices were being raised about the problems of debt, but these were still very much in a minority.  The anxieties had not percolated across the network of banks, and their confidence in lending to each other.  Suddenly, this changed, and we had a major liquidity crisis.  Inter-bank lending collapsed, leading very quickly to the demise of Northern Rock.  Not much had happened.  But negative sentiment suddenly cascaded across the banking network.

Companies must take these fundamental features of networks into account.  The potential problem extends far wider than financial markets.  Adverse comments, often with no basis in reality, about a firm and its products are posted all the time on the internet.  Most of the time, these do not get very far, often no further than the green-ink perpetrator of the comments.  But, very occasionally, a grievance, even one which is completely ill-founded, will get global traction and seriously damage a brand or even a whole company’s reputation.

One of the real cutting-edge areas of scientific investigation on networks is how to spot at a very early stage when a comment has the potential to go global.  So defensive strategies are possible: firms are not powerless in our highly connected world.  But it is crucial that both firms and governments start learning the lessons of the networked world of the 21st century.

Friday, 1 June 2012

Kahneman and schizophrenia in economics


I was at a fascinating session last night, with Nobel Laureate Daniel Kahneman in conversation with a leading thinker from the advertising world, Rory Sutherland of Ogilvy and Mather.  Kahneman was talking about his book Thinking Fast and Slow, a summary of his life’s work.

I am a great admirer of Kahneman.  Trained as a psychologist, he, along with his co-Laureate Vernon Smith, more or less created experimental and applied behavioural economics.  He had the extraordinary idea (!) that instead of theorising a priori about how ‘rational’ people ought to behave, we should observe how people really do behave.

His work shows that, in general, people do not behave as the model of Rational Economic Person says they should.  His Nobel lecture is very accessible, written in English, and is available at http://www.nobelprize.org/nobel_prizes/economics/laureates/2002/kahneman-lecture.html.    He concludes that ‘people reason poorly and act intuitively’.

Yet despite his scientific standing, economic theory has so far made very little use of his results.  Theoretical journals are still replete with articles full of calculus, in which agents (economist-speak for ‘people’) are reasoning very well, and taking the ‘optimal’ decision.

So there is a schizophrenia in the profession of economics.  Nobel prizes are awarded to people whose work shows empirically that in general people do not optimise.  Theoretical work carries on in the same old way, assuming that they do.

Why is this?  Perhaps Kahneman’s own work gives us an insight.  He distinguishes between System 1 and System 2 thinking.  System 1 is when the brain is almost on autopilot.  He illustrated this in his talk last night.  ‘If I mention the word “vomit”, your brain reacts.  If I ask “what is 2 plus 2?”, the answer comes in your mind automatically’.  System 2 thinking requires much more effort – most people, he said, cannot multiply 24 and 17 whilst at the same time negotiating a right turn in heavy traffic.

Actually, I guess that most economists could do this.  Many of them could even carry out the maths required to optimise a particular function at the same time!  In other words, economists are so steeped in calculus, they have performed these mathematical operations so many times, that for them, the maths of calculus has become System 1 thinking.

So when economists approach a problem it has become second nature to write down some functions and to maximise (or minimise) them.  It is as instinctive as adding 2 and 2 is for more normal people.

But Kahneman’s empirical insights require hard System 2 thinking.  You are trying to understand a particular problem.  Well, exactly how do agents behave in this situation?  What rules are they following, how do we translate them into maths, can we solve the resulting equations or do we need numerical solutions?

In short, it is much harder to do Kahneman-inspired theory than it is to maximise a utility function.  In economic theory, System 1 thinking rules!
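For a flavour of how mechanical the ‘easy’ side of that comparison is, here is a hypothetical textbook exercise of my own devising (not Kahneman’s, and not any particular paper’s model): maximise a Cobb-Douglas utility U(x, y) = x^a · y^(1−a) subject to a budget constraint px·x + py·y = m.  Calculus hands over the answer in one line — x* = a·m/px, y* = (1−a)·m/py — and a brute-force search along the budget line confirms it.

```python
# Hypothetical textbook exercise: maximise Cobb-Douglas utility
# U(x, y) = x**a * y**(1 - a) subject to the budget px*x + py*y = m.
# Lagrange-multiplier calculus gives x* = a*m/px and y* = (1-a)*m/py.

def utility(x, y, a=0.5):
    return x ** a * y ** (1 - a)

def optimal_bundle(m, px, py, a=0.5):
    """Closed-form optimum from the first-order conditions."""
    return a * m / px, (1 - a) * m / py

def grid_search_bundle(m, px, py, a=0.5, steps=10000):
    """Brute-force check: walk along the budget line, keep the best point."""
    best_x, best_u = 0.0, float("-inf")
    for i in range(1, steps):
        x = (m / px) * i / steps
        y = (m - px * x) / py
        u = utility(x, y, a)
        if u > best_u:
            best_x, best_u = x, u
    return best_x, (m - px * best_x) / py

m, px, py = 100.0, 2.0, 5.0
x_star, y_star = optimal_bundle(m, px, py)   # calculus: 25.0, 10.0
x_num, y_num = grid_search_bundle(m, px, py)
print(x_star, y_star)
print(round(x_num, 1), round(y_num, 1))
```

This is System 1 economics in a dozen lines: the hard part is not the maximisation, it is deciding whether real people follow anything like these rules in the first place — which is precisely the System 2 question Kahneman forces on the profession.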