Bible oaths retained in UK courts

In UK courts both witnesses and defendants have to swear or affirm to tell the truth.  The oath is ‘I swear by Almighty God [to tell] the truth, the whole truth, and nothing but the truth.’ Christians say this whilst holding the Bible, and those of other religions can say the oath over their own holy books.  Atheists, on the other hand, can ‘solemnly, sincerely and truly affirm’ to tell the truth rather than swearing on a religious text.

The Magistrates’ Association (the body which represents three-quarters of the 23,000 magistrates in England and Wales) recently debated whether to abandon the holy books and references to god.  As the BBC reports:

The plan was put forward by a Bristol magistrate, Ian Abrahams, who claims many people are no more likely to tell the truth after using it to swear an oath.  He believes what is needed is a greater sense of how seriously lying in court is treated. Mr Abrahams’ alternative oath would include an acknowledgement of the duty to tell the truth: “I understand that if I fail to do so, I will be committing an offence for which I will be punished and may be sent to prison.”

Abrahams claimed that people are no more likely to tell the truth under oath than otherwise, whilst religious leaders argued that the oath has meaning for the religious.  The change was, however, rejected by the Magistrates’ Association, meaning that people will continue to swear on the Bible and other religious texts.

This comes after The Girl Guides stripped all religious references from their promise and The Scouts introduced a secular pledge whilst retaining the religious version. This got me thinking about whether religious oaths should be used in court.  Given that a secular affirmation is available alongside the religious oaths, the religious oath is not being forced upon anyone – but is that the end of it?

Much as there is considerable debate over whether atheists are more or less likely than the religious to engage in criminal behaviour, I wonder whether the religious or atheists are more likely to lie under oath.  I also wonder whether the religious would be more or less likely to lie in court if they used a secular as opposed to a religious oath.

The key question is: would a religious person feel less obligation to tell the truth if they used a secular affirmation rather than a religious oath?  Both the affirmations and oaths are examples of what Austin called performative speech: an utterance that is neither true nor false, and the saying of which is the doing of a certain kind of act (e.g. ‘I swear to…’). On the purely linguistic level, there is no difference between the religious oath and the secular affirmation.  Furthermore, the secular affirmation in no way denies the existence of god, and one assumes that religious affirmers would still be operating under the religious injunctions to honesty (such as they are); I therefore do not think that switching to a secular affirmation for all would have any detrimental effects.  If anyone knows of any research into how likely people are to lie under oath, I would love to hear of it.

Crank magnetism and predicting the rejection of science

University of Bristol psychologist Stephan Lewandowsky and his colleagues have recently published an interesting article in the journal PLOS ONE, entitled ‘The Role of Conspiracist Ideation and Worldviews in Predicting Rejection of Science’.  The main findings of the research are that conspiracist ideation (i.e. the tendency to see conspiracies around every corner) is a major predictor of anti-vaccine views, and a minor predictor of climate change denial and of opposition to genetically modified organisms (GMOs).

It seems that those who reject one aspect of science are more likely to reject other forms of science and that those who fall prey to various conspiracy theories are also more likely to reject the scientific consensus.  Lewandowsky’s article provides, at least in the American context, some explanation of this.  I would be interested to see this study repeated in the UK to see if there are any differences.

The full data set for the study is available, so, time permitting, I will try to explore some of the findings in more detail.

The irrationality of religion

The University of York Student Union recently held a debate on the motion ‘This House believes religion is irrational and today’s society would be better off without it’ (see here for more details on the participants).  What caught my eye, however, was an article on The Guardian blogs by Andrew Brown, who was arguing against the motion, in which he appears to create a whole new category in between rational and irrational.

First, however, Brown denigrates young, white, middle-class people’s knowledge of religion, before serving up a nice tasty word salad:

All that they know about those religions is that the moderates are opposed to sex, which is nonsensical if not actively evil, while the extremists are nonsensical and evil in all kinds of other ways as well. Nor will they know many believers socially. Religiously fervent students are quite rightly shunned by everyone normal.

The trouble is that nothing in this harmlessly parochial view equips people to think about religion in general. And it can easily lead to the terrible confusion that follows when we consider religious belief, or religions, “irrational”.

The confusion here comes from supposing that whatever is not rational must be irrational and forgetting that there is a huge area of life where questions of rationality are wholly irrelevant. To ask whether religions are rational makes as much sense as asking whether they are pale green, or whether they taste like orange juice.

This is incredibly woolly thinking and does not make a whole lot of sense.  Given that irrational is the opposite of rational, I am unsure what could be neither rational nor irrational.  ‘Irrational’ is, by definition, that which is not rational; I fail to see how it could be otherwise.

Brown then goes on to argue that there is a ‘huge area of life where questions of rationality are wholly irrelevant’ (I assume that this area includes faith and religion). To me this seems like an admission that religion is not rational, and an indication that Brown may have been on the wrong side of the debate!

I think Brown’s position may be better described as ‘religion is not rational, but society would not be better off without religion’.  This points to a problem I mention in the Survey Design section of my Research Guides: that of asking two questions in one.  The motion ‘religion is irrational and today’s society would be better off without it’ is really two questions.  The first is ‘is religion irrational?’ and the second is ‘would today’s society be better off without religion?’  Whilst I would agree with both of these statements, I think Brown would have had more luck in the debate if he had been able to concede the irrationality of religion (which he seems to accept) whilst still arguing for the benefits of religion. Ultimately, badly framed questions lead to badly answered questions.

Unsourced JSA figures and the spectre of the scrounging Pole

A couple of news articles in the past few days caught my eye, one from The Telegraph and one from The Daily Mail.  Both focus on the same subject – the number of non-UK citizens claiming benefits, particularly Jobseeker’s Allowance.  Both articles also share another feature: neither cites its sources.  Both report various figures, such as this from The Telegraph:

New figures showed there were 407,000 non-UK nationals receiving the hand-outs last year, a rise of more than 118,000 since 2008, with the total bill running to hundreds of millions of pounds a year.

And this from the Mail:

Latest figures show that in February 9.2 per cent of all Jobseekers Allowance claimants were non-UK nationals.

It includes 35,000 from the EU, 35,000 from Africa, 33,000 from Asia and the Middle East and 6,500 from the Americas.

Poland tops the league table with 14,610 JSA claimants, followed by Pakistan 7,660, Somalia 7,120 and Portugal 5,860.

We are told that the figures in the Telegraph article come from a freedom of information request, but no other information is provided, whilst the Mail article is completely source-free.  This makes it very difficult to verify the accuracy of the information provided – but it does seem fishy.

The most recent data I can find on country of origin and Jobseeker’s Allowance claims comes from the thrillingly titled Nationality at point of National Insurance number registration of DWP benefit claimants: February 2011 working age benefits, published by the DWP.  Some of the figures in the Telegraph and Daily Mail reports are not too far from those in the DWP report, although figure 4 shows that there were 6,390 Polish JSA claimants, rather than the 14,610 claimed in the Mail’s article.

This might seem unimportant, but it is vital that the public can check the veracity of information in newspapers and other sources, particularly on such contentious issues as immigration and benefits.  This is all the more so given that the underlying tone of both articles is ‘damn foreigners coming over here and stealing our benefits’.

It is worth addressing that last point in more detail – do foreigners claim benefits at a higher rate than UK citizens?  The latest figures show that there are 1,351,100 JSA claimants in the UK, out of a population of 63,705,000; this means that 2.1% of the population claims JSA.  According to the Office for National Statistics there were 686,540 Polish people living in the UK in 2011, of whom 22,597 were unemployed.  If we assume that all 22,597 of the unemployed Poles claimed JSA (which is highly unlikely), that equates to 3.3% of the Polish population in the UK; if we instead use the Daily Mail’s figure of 14,610, it equates to 2.1%.
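
As a sanity check, the arithmetic is easy to reproduce. Here is a minimal Python sketch using only the figures quoted above:

```python
# Rough reproduction of the claim-rate arithmetic above.
# All figures are those quoted in the post; they are illustrative, not definitive.

uk_population = 63_705_000
uk_jsa_claimants = 1_351_100

polish_population_uk = 686_540  # ONS estimate, 2011
polish_unemployed = 22_597      # an upper bound on Polish JSA claimants
polish_jsa_mail = 14_610        # the Daily Mail's figure

print(f"UK claim rate:             {uk_jsa_claimants / uk_population:.1%}")
print(f"Polish rate (upper bound): {polish_unemployed / polish_population_uk:.1%}")
print(f"Polish rate (Mail figure): {polish_jsa_mail / polish_population_uk:.1%}")
# UK claim rate:             2.1%
# Polish rate (upper bound): 3.3%
# Polish rate (Mail figure): 2.1%
```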

Now, there are lots of caveats to these rough calculations (they are based on total population, not working-age population; the figures on Poles in the UK are a couple of years out of date; and so on).  They do, however, give a very strong indication that the percentage of Polish people in the UK claiming Jobseeker’s Allowance is not very different from the percentage of all people in the UK claiming JSA. This suggests that there are not massive hordes of foreigners coming over to Britain to take benefits away from UK citizens, and it exposes the ignorance of the majority of the commenters on the Daily Mail’s article.

If the Daily Mail and The Telegraph provided links to their sources, it would be much easier for the average reader to check the facts and figures. A more detailed analysis would also compare the rates at which foreigners and UK citizens claim JSA. Sadly, however, this kind of shoddy reporting and uncontextualised analysis is far too common in the media, and exposing it is one of the aims of this blog.

Should you use Net Promoter Scores?

Net Promoter Score (NPS) is a proprietary metric developed by (and a registered trademark of) Fred Reichheld, Bain & Company, and Satmetrix. It is widely used in market research as a measure of customer loyalty, often as an alternative to measuring customer satisfaction.  NPS is based on one question:

How likely are you to recommend [our company/product/service] to your friends and colleagues?

The responses for this question range from 0 (not at all likely to recommend) to 10 (extremely likely to recommend).

Those who give a score of 0 to 6 are detractors, those who give 9 or 10 are promoters, and the remainder (7 and 8) are neutral. The NPS is calculated by subtracting the percentage of detractors from the percentage of promoters, and is usually given in the form of a percentage.

However, is this a useful metric, and does it provide usable information?  The remainder of this post argues that the NPS is flawed, because it rests on both flawed mathematics and flawed questioning.

The flawed mathematics

There are several ways to arrive at the same NPS. For example, 20% promoters/80% neutral/0% detractors and 60% promoters/0% neutral/40% detractors both give an NPS of 20, even though the responses are massively different (in the first case 20% promoters minus 0% detractors = 20, and in the second 60% promoters minus 40% detractors = 20).
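
To make the point concrete, here is a minimal Python sketch of the standard NPS calculation, applied to two illustrative sets of ten responses that match the percentages above:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Two very different sets of ten responses, both scoring 20:
calm = [9, 9, 7, 7, 7, 7, 7, 7, 7, 7]             # 20% promoters, 0% detractors
polarised = [10, 10, 10, 10, 10, 10, 0, 0, 0, 0]  # 60% promoters, 40% detractors

print(nps(calm))       # 20.0
print(nps(polarised))  # 20.0
```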

The score therefore masks massive differences in the views of respondents, which, for a summary metric like this, is in my opinion a fundamental problem.  A company with no detractors and 100% promoters or neutrals is doing something markedly different from a company with 60% promoters and 40% detractors.

The flawed questioning

As I have already said, the question asked is ‘how likely are you to recommend [our company/product/service] to your friends and colleagues?’ Respondents who answer 0 to 6 are counted as detractors; however, just because someone is unlikely to recommend a company or product to friends, it does not automatically follow that they will recommend against it.  Someone who answers 5 has picked the middle value and may feel that they would recommend the company or product to some extent.  This, however, is not reflected in the NPS.

Only the top two responses (9 and 10) count as promoters; however, respondents can be very reluctant to give the highest scores, even if they are likely to recommend and are satisfied with the company.  In my experience of various customer satisfaction surveys, some respondents will never say that they are very satisfied (saying instead that they are quite satisfied) because they will not give the top mark on principle; there is always room for improvement.

Alternatives

If, as I have argued, the NPS is flawed, what should you use instead? I suggest sticking to traditional measures of customer satisfaction (‘how satisfied or dissatisfied are you with the company/product?’) on a 5-point scale.  If you are interested in likelihood to recommend (which is important to understand), I suggest asking ‘how likely are you to recommend the company/product to a friend?’, but using the following scale (or something similar):

  • Very likely to recommend
  • Quite likely to recommend
  • Neither likely to recommend nor to recommend against
  • Quite likely to recommend against
  • Very likely to recommend against

If you must use the conventional NPS question and answers, think about adding a ‘why do you say this?’ follow-up question to better understand people’s responses. Also, in addition to calculating the metric in the standard way, I would suggest looking at the averages (mean, mode and median) as well as the standard deviation; these will give you a much better idea of the spread of responses than the NPS on its own.
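
Continuing the earlier sketch, Python's standard library statistics module can produce those extra measures; the two distributions below are the illustrative ones from the flawed-mathematics section:

```python
import statistics

calm = [9, 9, 7, 7, 7, 7, 7, 7, 7, 7]
polarised = [10, 10, 10, 10, 10, 10, 0, 0, 0, 0]

for name, scores in [("calm", calm), ("polarised", polarised)]:
    print(name,
          statistics.mean(scores),
          statistics.median(scores),
          statistics.mode(scores),
          round(statistics.stdev(scores), 2))
# calm:      mean 7.4, median 7,  mode 7,  stdev 0.84
# polarised: mean 6,   median 10, mode 10, stdev 5.16
```

The standard deviations alone (0.84 versus 5.16) reveal the polarisation that the identical NPS of 20 conceals.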

Failures in open access journals

Peer review is central to academic publishing; it is what ensures that the work that is published has merit.  Peer-reviewed journals are the main way that academic knowledge is disseminated.  There are, however, many problems with this system, one being the cost of journal articles to the non-institutional reader.  Whilst university libraries subscribe to many journals and make them available to students and staff, members of the general public have to pay a large sum (such as £30 per article) to access an article without a subscription.  To remedy this situation there has been a massive increase in the number of open access journals, where content is freely available online.

Many open access journals require a fee before an article is accepted for submission, whilst others charge a fee for the publication of a successfully submitted article.  This model means that the publishers get their money from authors rather than from readers.  However, it seems that there are problems with the rigour of the peer-review system at some of these open access journals.

John Bohannon, a science journalist at Harvard University, has conducted a fascinating study in which he submitted a fictitious scientific paper to 304 open access journals over 10 months. The journals were based worldwide and included some published by respected academic publishers such as Elsevier, Wolters Kluwer, and Sage.

Bohannon used fabricated author names and fabricated academic institutions – for example, Ocorrafoo Cobange, a biologist at the Wassee Institute of Medicine in Asmara.  The paper was written to be superficially credible, but with flaws that any competent reviewer should be able to spot.  Different variations of the article were submitted to the 304 journals, but they all took the same format: molecule X from lichen species Y inhibits the growth of cancer cell Z.

Bohannon wrote several flaws into the paper.  An obvious one is that the caption of a data plot claims to show a “dose-dependent” effect on cell growth (which is central to the point of the paper), when the actual data clearly show no such effect.  Secondly, there was no effective control for the experiments: the control cells were not exposed to radiation as the test cells were, so no effect of molecule X can be concluded from the experiment.

The final flaw should be evident to most people, not just specialists; the paper reads:

In the next step, we will prove that molecule X is effective against cancer in animal and human. We conclude that molecule X is a promising new drug for the combined-modality treatment of cancer.

Such a drastic conclusion from such minimal data, combined with exaggerated claims for the future of molecule X, should raise multiple red flags about the author and the research as a whole.

To date, the paper has been accepted by 157 of the journals and rejected by 98. Of the 255 versions that went through the entire editing process to either acceptance or rejection, 106 went through peer review (the remainder did not), and 70% of the peer-reviewing journals still accepted the paper; this does not bode well. To its credit, the only journal that drew attention to the paper’s potential ethical problems and rejected it was PLOS ONE, from the Public Library of Science.

To ensure that the articles were not actually published, Bohannon withdrew them after they were accepted. It is, however, very worrying that such substandard research came so close to publication, and it brings open-access journals into disrepute. It shows that peer review is by no means a perfect system for ensuring that only valid research is published, and it draws attention to the plethora of poor-quality open access journals.

See Bohannon’s article in Science for more on his study.

Designing the Scottish Independence Referendum Question

Originally published on the RMG:Clarity blog on January 27, 2012

It was reported on Wednesday that the Scottish First Minister Alex Salmond has come up with what he thinks should be the question asked in a referendum on Scottish independence.  The Scottish National Party leader wants the question to be:

Do you agree that Scotland should be an independent country?

Mr Salmond described this question as ‘short, straightforward and clear’; however, it is a one-sided question and is therefore not appropriate for a referendum.  It presents a position (Scotland should be an independent country) and asks people to agree with it, rather than asking them to choose between two opposing positions.  In the 1997 devolution referendum in Scotland, voters were given two ballot papers, one on the principle of devolution and one on tax powers.  Each ballot paper presented two statements and the voter had to tick one.  The statements used were:

  • I agree that there should be a Scottish Parliament, OR
  • I do not agree that there should be a Scottish Parliament.

and

  • I agree that a Scottish Parliament should have tax varying powers OR
  • I do not agree that the Scottish Parliament should have tax varying powers.

These questions are two-sided: they present the ‘yes’ vote and the ‘no’ vote with equal weight, and they do not lead the voter down a particular path.

As experienced researchers, we know that with any survey it is very important to ensure that questions are not designed in a way that leads the respondent to a particular answer.  When a referendum will decide the fate of a country, it is even more important that the question is fair.  It is my hope that the Electoral Commission will oversee and regulate the referendum and ensure that a fair and balanced question is put to the Scottish people.  If not, any result will lack legitimacy; those pushing for a ‘no’ vote will be able to argue (and rightly so) that the referendum was biased against them from the outset.

Mobiles and acorns

Originally published on the RMG:Clarity blog on June 8, 2011.

What do the likelihood of mobile phones being carcinogenic and a lack of knowledge about the countryside have in common?  Well, both have been in the news recently – mobile phones in The Guardian and knowledge of nature in the Daily Mail.  More importantly, however, both stories come from press releases, and neither gives sufficient information to evaluate the merits of the research.

The World Health Organisation press release from the International Agency for Research on Cancer says that:

From May 24–31 2011, a Working Group of 31 scientists from 14 countries has been meeting at IARC in Lyon, France, to assess the potential carcinogenic hazards from exposure to radiofrequency electromagnetic fields.  …  The IARC Monograph Working Group discussed the possibility that these exposures might induce long‐term health effects, in particular an increased risk for cancer.  …  The evidence was reviewed critically, and overall evaluated as being limited among users of wireless telephones for glioma and acoustic neuroma, and inadequate to draw conclusions for other types of cancers.

Leading on from this, mobile phones have been categorised as Group 2B on the IARC carcinogen scale, meaning that there is ‘limited evidence of carcinogenicity in humans and less than sufficient evidence of carcinogenicity in experimental animals’.  It is this that has been published in one format or another all over the world.  However, it says nothing about what the monograph actually says; it does not inform the reader how the conflicting evidence on mobile phone use and cancer has been resolved.  The whole point of the press release, and of all the media coverage, is that mobile phone radiation is a possible cancer risk.  Yet nowhere does it say how this may ‘possibly’ happen, nor does it describe a biological mechanism by which the low levels of microwave radiation emitted by mobile phones could cause cancer.  At some point the monograph will become available on the IARC website, but I don’t imagine that journalists will revisit their articles in the light of any new information in the full monograph.

In a press release in the run-up to Open Farm Sunday, research commissioned by LEAF (Linking Environment and Farming) and published in the Daily Mail found that:

Fifteen per cent of adults didn’t know that a dairy cow is female, half weren’t aware that robins live in the UK all year round and one in five didn’t know that acorns come from oak trees.  …  Additional findings show that a quarter of 18-24 year olds and one in five 25-34 year olds didn’t know that tadpoles become frogs.

The only thing the press release says about the methodology is in the notes: ‘all stats One Poll Survey, 2,000 adults May 2011’.  We are not told how the survey was conducted – was it online, face-to-face, or over the telephone?  We are not told whether the people interviewed were from all over the UK, or whether they were from more rural or urban areas, nor how many people aged 18-24 and 25-34 were interviewed.  You will get very different results asking about acorns in inner-city London and rural Surrey, for example.  And surveying only 10 people aged between 18 and 24 would not give significant results, whilst surveying 200 would.
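
To see why the sample size matters so much, a standard margin-of-error calculation makes the point (a sketch that assumes a simple random sample, which is itself a generous assumption for an online panel):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (10, 200, 2000):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1%}")
# n =   10: +/- 31.0%
# n =  200: +/- 6.9%
# n = 2000: +/- 2.2%
```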

None of the above is meant to suggest that either the IARC work on mobile phones or the LEAF survey is unreliable, invalid or otherwise wrong.  My point is that we do not currently have sufficient information on the methodology and workings of either piece of research to come to a conclusion about its validity.  Whenever you are presented with research findings, it is vital to have full details of the methodology used; if these are not available, you should start questioning the findings yourself.

Do we need the Census?

Originally published on the RMG:Clarity blog on April 1, 2011

By now, most people in the UK will have filled out their census returns, either online or on paper, and returned them for the massive number-crunching exercise to begin.  But is the census necessary?  Some people and groups contend that it is too expensive, unreliable, and an Orwellian intrusion into people’s lives.  Take this quote from Daniel Hamilton, the Campaign Director of Big Brother Watch:

The census includes intrusive questions on your proficiency in English, your health, when you last worked, the identities of your overnight guests and the type of central heating you have.  The government has no need – and no right – to know this information about you.

This census is a monumental waste of time and money.  A large number of the questions duplicate data already held by the authorities on databases such as the electoral register, school records, tax returns and GP information.

Or this from the campaign group NO2ID, from their list of ‘10 Census Lies’:

 1. The Census is essential for government and business planning

On the contrary, it is worse than useless because it is expensive, inaccurate, and quickly out of date.

 6. The census results in high-quality information

No one knows how many people lie in their return. The 2001 census is generally believed to have ‘missed’ around 900,000 men under 40.

In a 9 July 2010 article, The Telegraph reported that the Minister for the Cabinet Office, Francis Maude, wished to scrap the census:

Mr Maude said the Census was “out of date almost before it has been done” and was looking at ways to count the population more frequently — perhaps every five years — using databases held by credit checking firms, Royal Mail, councils and Government.

“This would give you more accurate, much more timely data in real time. There is a load of data out there in loads of different places,” he said.

There are, then, several arguments against the census, from various directions, including cost (£482 million), privacy, accuracy, completeness and duplication of data.  It should be noted that arguments such as these have been in circulation since the very idea of a UK census was raised over 250 years ago – but are they valid?

£482 million may seem a massive amount to spend on a census (and is a large increase on the £259 million spent in 2001), but how does it compare with, say, the 2010 US census, which was only 10 questions long?  The cost of the 2011 UK census breaks down to 87 pence per person per year over the ten-year life of the census, or £8.70 per person for the full ten years.  The 2010 US census cost $14.5 billion (c. £9.06 billion), which equates to $46.96 (c. £29) per person per ten years – over three times the cost of the UK census, and for a much shorter survey.  Looked at in these terms, the relative cost of the UK census does not seem quite so excessive.
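
The per-person arithmetic is easy to check. In the sketch below the population figures are my own approximations for illustration: the UK figure is the one implied by the 87p-per-year number, and the US figure is the approximate 2010 census count:

```python
# Reproducing the per-person cost comparison; population figures are
# approximations assumed for illustration, not from the original articles.

uk_cost_gbp = 482_000_000
uk_population = 55_400_000   # implied by the post's 87p/person/year figure

us_cost_usd = 14_500_000_000
us_population = 308_700_000  # 2010 US census count (approximate)

print(f"UK: GBP {uk_cost_gbp / uk_population:.2f} per person per decade")
print(f"US: USD {us_cost_usd / us_population:.2f} per person per decade")
# UK: GBP 8.70 per person per decade
# US: USD 46.97 per person per decade
```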

The aim of the census is to provide a snapshot of the UK population at one particular time, rather than to track population change through time.  This means that while the criticism that the census is out of date as soon as it is done is fair, it rather misses the point.  By obtaining the same information about everyone in the UK (or as near to everyone as is possible), at the same time, one can make the comparisons about lifestyle and economic status that are central to future planning.  Data duplication is also a valid concern; there is little point in the government collecting data via the census that it could get from other sources.  However, that other data would have to be at least as complete as the census data.  Francis Maude has since had to qualify his views, at least for the time being; as he said on 26 July 2010, ‘The current advice from the ONS is clear.  Census alternatives are not sufficiently developed to provide now the information required to meet essential UK and EU requirements.’

As Daniel Hamilton suggests, the authorities may hold duplicate data in school records and GP information, for instance, but one wonders whether he would support the kind of data sharing that would be necessary to collate all this information.  Similarly, creating and maintaining a database that linked all of this information and kept it up to date and secure would be a monumental undertaking, and one that would probably exceed the cost of the census.

There are famous examples of people avoiding the census: the suffragettes in 1911, anti-poll tax protesters in 1991, and the mysterious omission of 900,000 men under 40 in 2001.  This, however, does not stop the census being the most complete survey of everyone living in the UK.  Whilst it is impossible to ensure a 100% completion rate, the rate achieved is high enough for the purposes for which it is required.  The census will provide a large body of data that the government will use to direct its spending and, perhaps more importantly, its funding cuts; without this kind of data, government cannot begin to apportion funding.

Some of those who argue against the census contend that other large-scale surveys could be used for some of the data instead; this, however, is a false economy.  When any research company, RMG:Clarity included, wants to carry out a quantitative survey on any topic, one of the first things that has to be worked out is the sampling – how many people do you interview in each particular area?  Take our poll on the general election last year: if we had randomly interviewed 1,000 people in Wales and left it at that, the results would have been very different.  We could easily have ended up mostly speaking to people in the South Wales Valleys and other Labour heartlands, which would obviously have affected the result of the poll.  To counteract this, we produced a target number of interviews for each constituency, directly proportional to the number of people of voting age in that constituency; the source of this data was, of course, the 2001 census.  (On top of this we also weighted the final results by other factors, such as age and gender, so that they matched the census figures.)  Using this process we were able to predict the outcome of the election to within +/- 2 percentage points.  The famous pollster George Gallup said that sampling a population is like taste-testing soup: if the soup is well stirred, one spoonful can reflect the taste of the whole bowl.  He could have added that a census provides the recipe for the soup.
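
As a toy illustration of that allocation step, here is a sketch with invented constituency populations; the real exercise used census-derived voting-age populations for every constituency in Wales:

```python
def allocate_targets(populations, total_interviews):
    """Allocate interview targets in proportion to each area's population."""
    total_pop = sum(populations.values())
    return {area: round(total_interviews * pop / total_pop)
            for area, pop in populations.items()}

# Hypothetical voting-age populations for three constituencies:
constituencies = {"Cardiff West": 62_000, "Rhondda": 51_000, "Montgomeryshire": 48_000}
print(allocate_targets(constituencies, 1000))
# {'Cardiff West': 385, 'Rhondda': 317, 'Montgomeryshire': 298}
# (In practice the rounded targets are reconciled so that they sum to the total.)
```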

So, is the census necessary?  It is certainly necessary for the government to know the basic facts about its population.  Without knowledge of the population of each council area, for example, government cannot fairly decide how much money to allocate to each council.  What remains to be seen, however, is whether the next ten years will produce a more effective, and cheaper, method of collecting this valuable data.

Measuring Wellbeing and Happiness

This was originally posted on the RMG:Clarity blog on November 25, 2010.

David Cameron has launched a project to determine the nation’s wellbeing.  The Office for National Statistics is to lead a National Wellbeing Project to find out both what makes people happy and how happy the UK is as a whole.  In saying that GDP is ‘an incomplete way of measuring a country’s progress’, Cameron is correct; there are many other factors at play.

However, the ONS is likely to find it very difficult to measure something as abstract as wellbeing.  One option is to ask a lot of people up and down the country, from all walks of life, a question along the lines of ‘overall, how would you rate your wellbeing over the last month/six months/year?’, with a simple rating scale accompanying it; this is subjective wellbeing.

This on its own, however, is not enough; the researchers would also need some demographic information, and other information about the respondents’ experiences in that time period, in order to bring a level of objectivity to the index.  In order to produce a wellbeing scale that is comparable to GDP, it is necessary to find a way to correlate various life events and experiences with subjective wellbeing.

Part of the difficulty with such a project is coming up with a list of all the factors that could potentially affect one’s wellbeing.  Such factors can be related to the economy, such as:

  • Job security
  • Job prospects
  • Mortgage security
  • The cost of living

Some of these factors are relatively easy to measure; for example, it might be possible to tie the Retail Price Index to people’s subjective accounts of their wellbeing.  However, there are many other factors affecting one’s wellbeing that are harder to measure, such as:

  • Educational achievement
  • Health
  • Size of social circle
  • Connection with local community
  • Relationships with family/ partner
  • Religiosity

This is where measuring wellbeing can get very complicated; for example, for some people having very high academic qualifications might be central to their subjective wellbeing, whilst for other people it is not particularly relevant.  Similarly, having a religious belief might cause people to report a high level of wellbeing, but one cannot extrapolate from this that those with no religious belief would report a low level of wellbeing.

Another major difficulty is factoring in age.  What causes a high level of wellbeing in people in their late teens and early twenties is unlikely to be the same as what causes a high level of wellbeing in those over eighty.  Furthermore, there is the old problem that correlation does not imply causation; something may correlate with wellbeing but not cause it.
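
For what it is worth, the correlation step itself is trivial; the hard part is interpretation. A toy sketch with invented data (statistics.correlation requires Python 3.10+):

```python
import statistics

# Invented illustrative data: subjective wellbeing (0-10) and a factor score.
wellbeing = [6, 7, 5, 8, 4, 9, 6, 7]
factor = [5, 6, 5, 7, 3, 9, 5, 8]

r = statistics.correlation(wellbeing, factor)  # Pearson's r
print(round(r, 2))  # 0.92
# A high r says nothing about which way causation runs, or whether a third
# variable (age, income, health) is driving both measures.
```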

In short, producing an index of wellbeing that, like GDP, is comparable both over time and with other countries is not impossible.  It is, however, a very great challenge, and we await the outcome of the National Wellbeing Project with interest.