An essay of mine is included in the upcoming thought leadership book Connecting Minds, Creating the Future – along with work by Joseph Stiglitz, Hans Rosling and Bill Gates – which is being published by Trampoline. The publication date is yet to be confirmed, but I will post it when I know.
This is the Newsweek cover story for the edition dated May 7th.
Boris Johnson is late. Five minutes after the scheduled start of Mayor’s question time—a quasi-monthly opportunity for members of the London Assembly to demand a public accounting from the city’s top elected official—he arrives at last, wearing a backpack over his raincoat and carrying a large takeout coffee, his schoolboy thatch of platinum-blond hair even more disheveled than usual. He stuffs the backpack beneath his desk, casually tosses down a crumpled copy of the agenda, and removes his coat to reveal the traditional garb of the British ruling class: a navy-blue suit. “We’ll take item two while the mayor composes himself,” the chairperson, Jennette Arnold, says dryly.
The assembly members are seated in a horseshoe with Johnson at the open end, a lone figure in an expanse of purple carpet, his back to a big window overlooking the gray expanse of the Thames. As the Conservative mayor begins delivering his report, Labour members of the assembly try to shout him down, and the session soon degenerates. Arnold bangs her gavel. “I will not have this question time turned into a campaign,” she chides.
Like it or not, however, that’s exactly what the tumultuous meeting is—and it’s only a warmup. On May 3, Johnson is facing off in a rematch against one of Labour’s wiliest campaigners and most ruthless operators. The victor will run Europe’s largest and most diverse city for the next four years. In their last contest, four years ago, Johnson defeated the then-two-term mayor, Ken Livingstone. No politician in the city is more entrenched in London politics or more skillful at street fighting than Livingstone, a man who earned the nickname “Red Ken” in his titanic struggles against Margaret Thatcher in the ’80s, when she was prime minister and he led the now defunct Greater London Council. During the 2008 race, Livingstone called Johnson “the most formidable opponent I will face in my political career.” In the run-up to this week’s vote, opinion polls seesawed, but Johnson seemed to be pulling ahead in the home stretch.
The cognitive and motivational processes that influenced your impulse purchase of a double-chocolate muffin at the coffee shop this morning may not be as straightforward as once thought. Traditional economic theory would say that you bought it because of the muffin’s “utility”: at the moment you made the decision, you judged it the best way to allocate your resources across a range of goods and services in order to gain satisfaction.
One of the limitations of economics is that it is abstract. Its principles are a series of hypotheses that stress efficiency and idealised versions of human behaviour: it emphasises what people ought to do in order to achieve optimal outcomes. Economics is rooted in the ideal, not the human. The problem is that people don’t always do what’s most efficient. Psychological observation has offered us insights into what people actually do. But psychology and economics don’t paint the whole picture: economic theory works well when describing large groups, and poorly when analysing individuals. Psychology can give valuable insights into individual behaviour, acting as a good descriptive tool, but it lacks coherence when it comes to theory.
Psychologists might point out that they were conducting tests long before economists, but neither discipline is a pure science. The theory of utility might explain why you buy a muffin, but it stumbles when explaining why you gambled your inheritance or volunteered at a homeless shelter. Choice behaviour based on cost-benefit analysis leaves little room for ambiguity.
However, new ways of examining economic behaviour are being forged by a growing number of interdisciplinary research teams from the fields of neurobiology, experimental psychology, mathematics and social science. They are developing an approach based on biologically plausible models, using tools such as neural imaging, and analyses of how chemicals in the brain or neurons operate, rather than grand theory.
Using data gleaned from controlled economic experiments, departments of neuroeconomics — many barely five years old — are attempting to fill the gaps in our knowledge between what people should do, and what they actually do. The aim: to create an algorithmic model of human behaviour through a synthesis that blurs the border between the social and natural sciences.
The Center for Economics and Neuroscience in Bonn uses neuroimaging to investigate the neural basis of social and economic decision-making. It claims to be the first lab in Europe to scan the brains of two people taking part in an experiment simultaneously. Each subject was given a series of problems to solve and was rewarded with money for correct answers. However, one was given more money than the other. The subjects’ brain activity was recorded as they solved the problems and as they received their unequal rewards.
Scientists analysed data from the ventral striatum, the part of the brain most activated when a subject receives rewards. They found that the strength of the activation doesn’t just depend on what an individual receives — it’s made stronger or weaker depending on what others receive. Activation was weaker in the subject who received a reward but was aware that the other subject was getting more. This suggests that, for the subject receiving less, the comparison with what the other subject gained mattered more than the sum received. The other insight: the more active the reward centre in the brain, the less of a role rational thinking seems to play, which helps to explain fluctuations in financial markets and — perhaps — why you decided to buy a chocolate muffin as well as a raspberry one.
First published in the May issue of WIRED.
A transcript of a talk I gave at the Institute of Practitioners in Advertising on Social Innovation in mobile, 28/3/2012
Forgive me for stating the blindingly obvious, but it’s clear that technology is converging in mobile devices. The first quarter of 2011 saw a momentous shift: for the first time, sales of PCs were overtaken by sales of mobile devices such as smartphones and tablets.
And, while we don’t know exactly what form our devices will take in the next few years, there is a very clear trend towards what many of the speakers today have talked about – a connected device that travels with us wherever we go and that constantly produces vast amounts of data about who we are and what we do. A device that’s our primary interface with the world. Each of our devices – our tablets, phones, gaming consoles, devices in our cars, the sensors that will be an increasing part of our lives as the internet of things develops – is generating a vast stream of data: far more, in fact, than we can use.
Wal-Mart, the biggest retailer in the US, handles more than 1 million customer transactions an hour, feeding databases estimated at 2.5 petabytes – 167 times the number of books in the Library of Congress – and there are many other examples of this data overload. Businesses now view the data we’re all generating as a raw material – an economic input on a par with capital and labour. Data has been described as “the new oil”.
Large value is being unlocked. Insights that were previously unknowable are coming to light. We’re discovering new things, building new businesses, realizing new opportunities. And another benefit is what I’d like to talk about today: we can help people live better lives.
Time is short, so I’m going to focus on three initiatives in social innovation in the developing world, each of which demonstrates an original and successful approach to tackling very real problems that people face in three areas:
Health – counterfeit drugs in West Africa and beyond.
Displacement – the global refugee crisis.
Information – what happens when you can’t rely on – or trust – your government?
All three are using mobile in dynamic and original ways.
One morning in December 2011 Mikko Hypponen, the chief security research officer at F-Secure, an anti-virus software company, scrolls down the screen of his laptop examining the latest of the 200,000 malware files to arrive in his office every day in Helsinki. As he does so, the data shifts downwards, the most recent files at the top. “This sample, PA3control.exe, arrived eight minutes ago,” he says. “It’s infected with a virus we’re aware of, which means we don’t have to do anything. We already know what it is.”
Hypponen is typing hard in short bursts. He is dressed in black except for a mustard-coloured shirt. His hair is pulled into a blond ponytail and he wears small, round spectacles. “Let’s look at this file deeper,” Hypponen says, clicking on another data point. “So what do we know? First of all, it’s very small. It’s two kilobytes, which is suspicious. We look at the file type… It’s a Windows executable.” An executable is a file in a format that the computer can run directly, not one intended to be read by humans. “When did we get it? Where did we get it? What’s the file hash? [A hash is a number calculated from the contents of a file, uniquely identifying it among all other files.] How many times have we received it? What do we know about its structure? How many of our users have executed this file in the past week, or ever?”
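The file hash Hypponen mentions can be sketched in a few lines of Python. The SHA-256 digest below is an illustrative choice – the article doesn’t say which hashing scheme F-Secure’s system actually uses – but the principle is the same: identical contents always produce the same fingerprint, so a lab can instantly recognise a sample it has already analysed.

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    """Return a hex digest identifying the file contents."""
    return hashlib.sha256(data).hexdigest()

# Identical contents always hash to the same value...
sample = b"\x4d\x5aMOCK_SAMPLE"  # illustrative bytes standing in for a malware file
assert file_fingerprint(sample) == file_fingerprint(sample)

# ...while changing a single byte yields a completely different fingerprint,
# which is why a hash can distinguish a new sample from a known one.
assert file_fingerprint(sample) != file_fingerprint(sample + b"\x00")
```

In practice this is how a system like F-Secure’s can say “we already know what it is” within seconds: the incoming file’s hash is simply looked up in a database of fingerprints seen before.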
Hypponen pulls up another window. “What I’m looking at is the report for this file,” he says. “It takes two or three minutes to generate, but I’m thinking that this file might not do anything interesting because it’s too small. What is more likely is that it’s corrupted. It looks suspicious because the header, the part of an executable that reveals what’s in the file, is missing, so it doesn’t look right. It’s likely that it will just crash.”
Hypponen is right: when launched, the file doesn’t run at all. He looks down the list of data points before him. Some are in red, meaning that they haven’t been analysed yet. Further down the screen, the digits turn green: they have been processed.
As at every security company across the world, the Malware Sample Management System (MSMS) at F-Secure reveals a steady, and growing, onslaught of toxic binary fizzing through the internet, looking for vulnerabilities. Today, criminals are producing malware on an industrial scale. Security company McAfee identified six million unique malware samples in the second quarter of 2011 alone. And each sample means countless files containing the original virus loose online. Malware – Trojans, worms, spyware, backdoors, fake antivirus software, rootkits and others – is developed and sold to third parties, who will alter the source code for their own illegal purposes.
Hypponen checks another data point: his team claim to have logged 46,655 unique pieces of malware in just the past 24 hours.
Behavioural economists are always quick to tell us that human beings are extremely poor at assessing risk: we underestimate likely dangers and overestimate unlikely ones. Automotive accidents are relatively common, but we’ll risk taking an important call while we’re driving. Shark attacks are extremely rare, but thoughts of an encounter with a great white cross our minds when we step through the surf.
Only one British person has died from a rat bite in the past four years, whereas the lifetime risk of dying in a road accident for UK citizens is 1 in 240; yet 42 per cent of 25- to 34-year-olds admit to having a phobia of rats. Fear of driving is comparatively small because we are willing to take risks if we think that we can control the outcome. We associate driving with being in control of the vehicle, whereas an encounter with a rat seems relatively unpredictable. After the terrorist attacks of 9/11, 1.4 million Americans changed their travel plans for Thanksgiving and Christmas, despite the fact that it was far, far more likely that something bad would happen to them if they travelled by car than by plane.
The evaluation of risk is a mathematical computation, meaning that we should be able to make use of data to make better decisions. But the truth is that, for individuals, risk and emotion are inseparable. We think that we’re being rational even when our feelings are affecting our judgment. But does it work the same way for groups of people? Do large corporations and markets find it easier to assess risk by pooling data and using facts to drive decision-making?
Credit ratings agencies are supposedly entirely data-driven businesses; their area of expertise is to assess risk within financial marketplaces by determining the quality of debt obligations such as securities. Consequently, the agencies hold enormous sway over all financial instruments, including sovereign debt, by determining how that debt should be graded. Using a sliding scale from AAA to CCC, an agency comes to its decisions based on how likely a borrower is to default on a loan. A rating of BBB or higher means ‘investment grade’; anything below this is regarded as ‘junk’.
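That cut-off is, at bottom, a simple ordering. The sketch below uses an illustrative, simplified rating ladder (real agency scales add notches such as AA+ and BBB-, omitted here for clarity); `is_investment_grade` is a hypothetical helper, not any agency’s actual terminology.

```python
# Simplified rating ladder, best to worst. Real scales add notches
# (AA+, BBB- and so on), which are left out here for clarity.
LADDER = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC"]

def is_investment_grade(rating: str) -> bool:
    """BBB or better counts as 'investment grade'; everything below is 'junk'."""
    return LADDER.index(rating) <= LADDER.index("BBB")

assert is_investment_grade("AAA")
assert is_investment_grade("BBB")      # the boundary case
assert not is_investment_grade("BB")   # first rung of 'junk'
```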
In the US, the big three – S&P, Moody’s Investors Service and Fitch Ratings – have been anointed by the government; if you want to participate in the market, SEC and Federal Reserve regulation means that you have no choice but to work with them.
Despite the trust invested in them by the government, these agencies failed to spot the financial crisis of 2008, continuing to rate toxic mortgage-backed securities AAA until the crisis engulfed the global economy. Lehman Brothers, Washington Mutual and AIG were all rated ‘investment grade’ until September 15th 2008, the day Lehman filed for bankruptcy.
Part of the problem was that, in good times, no one wants to be seen as the person putting the pin in the balloon: bubbles tend to be driven by a kind of euphoria; a sense that this time things are going to be different. And agencies bill the issuers of the structured financial products, not the investors, meaning that they have a relationship with the investment banks: the banks are their clients and, consequently, the source of their income. If a bank wants an AAA rating for a tranche of mortgage-backed securities it’s just issued, it’s in a ratings agency’s interest to give the client what it wants. And, during the boom, there was a third problem: the financial instruments the agencies were rating were so complex and so new that the agencies simply didn’t have the information needed to make the right decisions – they were, effectively, relying on ways of making decisions other than data.
Post-2008, with, effectively, a monopoly and Apple-sized margins, what incentive do the agencies have to reform? Congress has encouraged them to come up with some ideas, among them reviewing their analytical models, more complete disclosure of a security’s collateral and better-trained analysts. Ideas such as introducing competition by having more agencies and changing the way they are paid – for instance, by investors in debt rather than its issuers – don’t seem to be on the table.
The agencies’ defence is that they are just like anyone else: they are simply offering opinions, which are constitutionally protected as free speech. Effectively they’re arguing that their judgments are made in good faith but cannot be relied upon – which raises the question of why, if they’re not to be relied upon, the government has offered them a monopoly on unreliability. Surely the guy at the Red Lion would be just as happy to receive hefty fees for equally undependable opinions on securitization?
Ongoing litigation in California is challenging that. When the housing market crashed, California’s main public pension system (CalPERS) lost an estimated $10 billion. Its administrators are involved in litigation with two of the ratings agencies to try to secure compensation for their awarding the highest ratings to three financial products CalPERS invested in. The products were subsequently discovered to consist largely of high-risk subprime mortgages. The agencies in question, Moody’s and S&P, have argued that the action is an unjustified assault on free speech.
In January a judge in San Francisco ruled that, while he agreed that the agencies’ ratings are constitutionally protected as free speech, the lawsuit could continue because CalPERS had produced evidence that could potentially establish liability – namely, that both agencies allegedly made factually dubious claims about the investment ratings.
In other words the ratings agencies underestimated likely dangers (the securities they were recommending contained toxic crap) and overestimated the unlikely ones (if we don’t give this client a AAA rating we’ll lose market share). It turns out that even those entrusted with using data to offer copper-bottomed judgment were sticking their fingers in the air – just like the guy at the Red Lion.
This is a slightly longer version of a story that appears in April’s WIRED.
Rick Santorum’s recent victories in Colorado, Minnesota and Missouri in the Republican presidential primaries have demonstrated that there are many GOP activists who are unwilling to do the bidding of the party establishment and get behind Mitt Romney.
Nevada was the first state in which Romney polled more than 50 per cent of the votes (50.1 per cent, to be exact) in a single primary and, before Florida, he had received little over 30 per cent of the total popular vote. In Minnesota and Missouri he failed to win a single county, and in Colorado he received 34.9 per cent of the vote to Santorum’s 40.3 per cent.
Throughout the campaign many activists have made their position clear: they would prefer a cultural warrior like Gingrich, a pious bible basher like Santorum, a candidate who believes the US should abandon taxation and foreign policy like Ron Paul, a fatuous Fox News populist like Michele Bachmann, or someone who believes that overseeing the country with the largest GDP in the world is just like running a pizza franchise, like Herman Cain.
The poverty of the Republican candidates for 2012 has been matched by the low voter turnouts that reflect the disappointment felt by many activists that they don’t have a candidate who represents them. This is not uncommon in democracies. Many countries in the western world suffer voter apathy. For many people, even those engaged with current events, politics is rarely something about which they feel positive emotions. We vote in elections because we feel that we should. More often, we vote for candidates simply because they’re not the others – a University of Michigan study last year suggested that anger was the prime motivator for active voters.
A large percentage of people are not habitual or periodic participants in democracy: a report in 2010 revealed that 56 per cent of young people were not even registered to vote. The number of people voting in general elections in most modern democracies – 65 per cent voted in the 2010 general election in the UK – suggests that there is widespread dissatisfaction with the candidates: a malaise has spread among the electorate, who believe, to use the line from the seventies, that whoever you vote for, the government always wins.
In the UK, the Labour party has a similar problem to the Republicans: there are few mainstream Labour voters who really believe that Ed Miliband is likely to lead the party to a sweeping populist victory at the next election. At very best, it’s hoped that the Conservatives will be so unpopular that Labour might be able to sneak a small majority. Miliband won the leadership election largely on the basis that he wasn’t his brother, who was seen by rank-and-file members as too Blairite, too middle of the road, and not representative of the vision shared by Labour activists. In this case, the rank-and-file got what it wanted, and it looks like it got the wrong guy: according to the most recent YouGov poll, Miliband is significantly less popular than Neil Kinnock was at this stage as opposition leader.
The number of people who get to vote in party leadership elections or nominate presidential candidates is very small because the number of political activists is relatively small. In the final round of voting in 2010, Miliband won the leadership with 175,519 votes to his brother’s 147,220. However, he won only one of the three major voting blocs, meaning that he was the first Labour leader since the electoral college system was introduced in 1981 to be elected without the approval of the majority of Labour members.
By allowing candidates to be chosen by those who feel most passionately about a political movement, parties don’t always end up with the best candidates. What they get is the candidate who appeals to those who are most ardent and purist in their feelings about the movement – a tyranny of the minority. This flies in the face of much of what we have learned from the development of networks in the past twenty years: while technological innovation is focused on the crowd, the social and the shared, political decision-making is still the province of the self-selecting few. The system works in favour of the parochial and the insular, which, of course, doesn’t play to the electorate, who exist outside the party bubble.
Candidates who energize the base of the party sometimes manage to reach into the mainstream – for instance, Margaret Thatcher in 1979 and Barack Obama in 2008. But these examples are the exceptions rather than the rule. Finding the candidate who can play well both to the minority (those who have a stake in the ideology of the party), and the majority (many of whom have fluctuating political allegiances) is an imprecise art.
Ironically, Obama would likely prefer to face one of the other candidates that so enamour the Republican base – the prospect of President Santorum would motivate the Democratic base and terrify all but evangelical Christians. In all likelihood, however, he’ll face Romney, the GOP candidate with the most realistic chance of beating the incumbent – whether the grassroots Republicans want him or not.
A year on from the wave of uprisings in the Middle East and North Africa (MENA) numerous pieces of analysis have been published about the causes of the social unrest. The consensus is that demographics play a large part: the region is undergoing a “youth bulge” – one in five people living in MENA are between 15 and 24 and many of them expressed little hope of raising their standard of living. The population’s relative youth also helps to explain the role played by mobile devices and social media in the organisation of protests and demonstrations. Then there was the matter of political succession; some MENA countries have fallen into a form of dynastic republicanism, a perplexing hybrid of a monarchy and a presidency that’s passed from father to son.
But was this all? In the two years preceding the social unrest in North Africa, the U.S. response to the global financial crisis was for the Federal Reserve to print billions of dollars. The knock-on effect of this monetary strategy was to cause asset and commodity prices to rise, causing instability elsewhere in the world. Asset-price inflation meant that millions of people found it harder to feed their families. Mohamed Bouazizi, the street vendor whose self-immolation proved the catalyst for the protests, came from a village which, according to the New York Times, suffered 30 per cent unemployment. Bouazizi could find no other work and supported several family members with earnings of $140 per month made by selling fruit on the roadside.
Following the popular uprising in Tunisia, civil unrest spread across the region. During this time – in fact, between the ousting of President Ben Ali of Tunisia and President Hosni Mubarak of Egypt – researchers at Barings bank released an interesting data set. The numbers revealed how the rising prices of commodities had caused the price of bread to rise in many MENA countries, and compared this data with that of European territories that experienced significant social unrest in the mid-nineteenth century.
The data from the nineteenth century comes from economists Helge Berger and Mark Spoerer, who calculated the scale of bread-price inflation in some of the key locations of the 1848 revolutions that spread from France through the Italian and German states, Denmark, Hungary and parts of the Habsburg empire. The blue diamonds towards the bottom left of the graph show countries such as Sweden, Russia and England, where wheat prices were comparatively stable – and there were no revolutions. The mapping demonstrates that the more dramatic the increase in the price of wheat, the more likely it was that revolution occurred.
Fast-forward 163 years and the data from the Arab spring has been mapped on top of the data from 1848. The blue crosses show the countries that make up MENA. While there’s no clear correlation, the data suggests that countries where prices were relatively stable were much less likely to have undergone social unrest.
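The relationship both data sets gesture at – the sharper the rise in staple prices, the likelier the unrest – is, at bottom, a correlation computation. The Python sketch below shows the method; the figures are invented purely for illustration and are not the Berger–Spoerer numbers or the Barings data.

```python
# Hypothetical data for six imaginary countries: percentage rise in
# bread prices, and whether serious unrest followed (1) or not (0).
price_rise = [5.0, 8.0, 20.0, 35.0, 50.0, 70.0]
unrest = [0, 0, 0, 1, 1, 1]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# In this toy data the association is strongly positive: the countries
# with the biggest price rises are the ones that saw unrest.
assert pearson(price_rise, unrest) > 0.8
```

A scatter plot of the real data, like the one described above, is the visual form of the same idea: points drifting up and to the right as price inflation and unrest rise together.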
Note to tyrants: maintaining a supply of staple commodities is a superior survival strategy to taking down the internet.
This short animation from graphic artist Fraser Davidson uses a section from Bill Maher’s recently released audiobook The New New Rules to illustrate how the NFL redistributes wealth from successful franchises to less successful ones. A sport whose governing body splits revenue equally among the 32 franchises is America’s most successful television spectacle. Around 100 million people will watch the Super Bowl live on February 5th this year – that’s 40 million more than go to church on Christmas Day.
Liverpool’s handling of the Luis Suarez affair is a case study in poor reputation management. It’s hard to imagine how a global brand with significant resources was able to make such a mess of the incident. That is, until you remember that football is a sport that, so often, provides great examples of poor decision-making and cack-handed management. Clubs are bought and sold, managers fired and salaries agreed often on the basis of little more than emotion. The Premier League’s two richest clubs – Manchester City and Chelsea – are run not as business concerns but as a means of amusing their owners. Why? At its root, football is about how people feel and the value they place on that emotion.
Manager Kenny Dalglish’s conduct is reminiscent of that of former BP chief executive Tony Hayward, whose prickly, suspicious behaviour following the Deepwater Horizon disaster in the Gulf of Mexico provoked public outrage and condemnation from the highest levels of government. Like Hayward, Dalglish thought not about how the scandal played in the wider world, but how it was playing to his constituency.
Would, say, a large corporation have ordered its employees to don T-shirts in a gesture of support for a colleague accused of gross misconduct? In the vast majority of workplaces, anyone accused of such an offence would have been suspended until the results of the investigation were announced. Would any serious business, after its employee had been found guilty of using racist language, fail to condemn him? Liverpool’s response to the verdict was to help Suarez craft a mealy-mouthed statement that fell well short of an apology. Suarez’s words read like a transcript from the film Borat.
That Liverpool supporters held up shirts in support of a player shows that fans would rather believe Suarez has been the victim of a miscarriage of justice perpetrated by a corrupt governing body in cahoots with an ancient enemy, Manchester United, than accept the verdict of an exhaustive independent investigation. Their claims are, of course, preposterous, and show that some football fans act like members of a cult rather than followers of a sport. And cults thrive not on reason and argument, but on fervour and passion. This, of course, is great for a brand – passion for and engagement with a product are incredibly hard to generate and can’t be faked. But this kind of intensity needs careful management if it’s not to become destructive.
The entire situation could easily have been nipped in the bud weeks ago if, after the Manchester United game, Suarez had used his ‘I’m from South America – I didn’t realize I was using racist language’ defence, called Patrice Evra to apologize and made a donation to an anti-racism charity. Job done – and Liverpool would have had a player available for the games he’s been banned for, saved thousands in lawyers’ fees and not made themselves appear more interested in tribal loyalty than in addressing the serious and offensive behaviour of one of the club’s employees.
The top Premier League clubs are well aware that the opportunities for future revenue growth come from outside Europe. And the new markets don’t operate like fanbases of the old days, which were largely territorial or handed down from father to son. Fans in Asia, Africa and North America don’t care about the tribe, they care about the brand and what it says about them; which is why Dalglish’s siege mentality illustrates the worst kind of short-sighted, parochial management.