Editorial Guidelines issues
This guidance note discusses how to report statistics. Its purpose is to highlight some of the pitfalls and offer guidance on how to interpret and report figures in our output correctly. It is not intended to provide comprehensive advice on statistical calculations. Advice on assessing the credibility of data-based stories, on statistical checking or on how to report statistics can be sought from Robert Cuffe, the Head of Statistics, BBC News (robert.cuffe@bbc.co.uk), and the BBC centres for data journalism in each Nation.
This guidance note relates to the following Editorial Guidelines:
- Accuracy
See Editorial Guidelines Section 3 Accuracy
- Impartiality
See Editorial Guidelines Section 4 Impartiality
- Politics, Public Policy and Polls
See Editorial Guidelines Section 10: Politics, Public Policy and Polls: Opinion Polls, Surveys and Votes
In addition, the Editorial Policy Guidance Notes on Surveys, Opinion Polls, Questionnaires, Votes and Straw Polls and Removal of BBC Online Content may also be relevant.
Elsewhere on the BBC
- BBC Academy online training Reporting Statistics [BBC staff only]
- BBC Academy page about the use of Big Numbers by Robert Peston and Mark Easton
- BBC Training checklist "Making Sense of Statistics – 10 Golden Rules"
Elsewhere on the Web
- Royal Statistical Society online course
- Royal Statistical Society
- Full Fact, UK independent fact-checking charity
- By Professor Steve Doig, Arizona State University
Key points
- We should reserve the same scepticism for statistics as we would for facts or quotes. Avoid taking statistics at face value.
- We should not rely on press releases alone, but look beyond the headlines, asking the producers of statistical information how figures were arrived at in order to assess their credibility.
- When our output includes statistics, they must be accurate and verified where appropriate, with important caveats and limitations explained.
- When explaining statistics, we should put them into context; a number used on its own is rarely meaningful.
- We should avoid contributors presenting competing statistical claims without any analysis or interpretation about the veracity of those claims.
- Where statistics are misused or wrong, we should challenge and correct them, particularly where they are central to an argument over a controversial issue.
- We should weigh and interpret statistics, helping audiences to judge their magnitude and importance. We should assess whether results are 'statistically significant' or due to chance and consider whether a 'statistically significant' figure is of 'practical significance' to our audiences.
Guidance in full
- Introduction
- Sources
- Press Releases and Looking Beyond the Headlines
- Contextualising statistics
  - Averages
  - Big and Small Numbers
  - Outliers
  - Projections
  - Rising or Falling Numbers
  - Regression to the Mean
  - Percentages and Percentage Changes
  - Correlation or Causation?
  - Misleading Graphs
  - Selective Comparisons
  - Risk
- Statistics in Debate
- Statistical Significance – How sure are we?
- Transparency
- Corrections
Introduction
Statistics are a great source of information which can lead us to strong stories, provided we ask the right questions and are aware of the pitfalls. All producers of statistics should be able to justify their figures and conclusions and explain any assumptions upon which they are based. So it's good practice to speak to the person or organisation who calculated the statistics, reserving the same scepticism for numbers as we would for any fact or quote. You don't need a degree in maths, just a bit of common sense.
There are a few top-level questions we should usually ask of the producers of statistics:
- WHO has produced the statistics? How reliable is the source?
- WHY have the statistics been produced and why now? What question is the statistic an answer to? Does the source have a vested interest or hidden agenda?
- HOW have the statistics been compiled? What exactly has been counted? Are the underlying assumptions clear?
- WHAT does the statistic really show? Does the study really answer what it set out to test? What are the producers of the statistics not telling you? Avoid automatically taking statistics at face value.
- WHERE can you find the underlying data and is it available?
When our output includes statistics, we should explain the numbers, put them into context, and weigh, interpret, challenge and present them clearly. The statistics must be accurate and verified where appropriate, with important caveats and limitations explained. We should use a range of evidence to put statistical claims into context and help audiences to judge their magnitude and importance. Where claims are wrong or misleading, they should be challenged.
Sources
All Official Statistics should be produced impartially and free from political influence. The Office for National Statistics is the country's largest independent producer of Official Statistics and a highly reliable source. Central Government departments and agencies, the devolved administrations in Northern Ireland, Scotland and Wales and other Crown bodies also produce Official Statistics. The data these bodies collect is subject to assessment by the independent UK Statistics Authority. Public bodies also produce a category of Official Statistics called National Statistics. National Statistics come with an accredited kite mark, meaning they meet the standards set by the Code of Practice for Official Statistics and are assessed by the Office for Statistics Regulation, which is part of the UK Statistics Authority.
Other reliable sources may include university research departments or independent think tanks, like the Institute for Fiscal Studies. Consideration may also be given to whether a source has proved reliable in the past.
Peer review of research published in scientific journals is an indication of reliability, though it may not guarantee it or prevent publication of invalid or even fraudulent results. You also need to be aware that one piece of research may not present the whole picture: studies with positive findings are more likely to be submitted to and published by journals than ones where no effect was shown. This sort of 'publication bias' may distort the overall narrative.
Sometimes organisations or individuals, such as politicians, may mislead with their use of statistics, exaggerate or only present statistics selectively to support their claims or policies. Some organisations may be funded by a body with a vested interest in the information. For example, a company or government department that wants you to believe its product or policy is the best. It may have a hidden agenda and a particular reason why it is reporting certain results now. Or it may be hiding negative results or failed studies, where the efficacy of what was being tested was not established or replicated. So you may also need to consider what the source is not telling you, and why.
It is therefore good practice to check the numbers with the primary source and avoid using statistics as reported by a third party, unless it is editorially justified. This may include visiting the website, speaking to the person who compiled the data and reading the study, paying attention to how it was designed. Consider whether there is alternative evidence and check questionable data with experts. Avoid publishing data from a biased source unless you have substantial corroborating evidence or there is a clear editorial justification for publishing.
(See: Editorial Guidelines Section 3 Accuracy)
For further discussion about how to evaluate statistics from sources, such as surveys and polls see separate guidance note.
(See: Editorial Guidance Opinion Polls, Surveys, Questionnaires, Votes and Straw Polls)
Press Releases and Looking Beyond the Headlines
Press releases can alert us to good stories, but they can also contain exaggerations or use statistics selectively. So we shouldn't always rely on them or take them at face value. We should regularly look beyond the headlines, asking the producers of the statistical information how figures were arrived at to assess whether they seem credible. One of the requirements for accredited National Statistics is that when data is used in official public statements, the statistics behind them are published in a transparent way to maintain trust in official figures. We should be wary of reporting statistics from any source where the underlying data or analysis is not in the public domain and therefore not open to scrutiny.
Looking beyond headline figures, which often focus on averages or the UK as a whole, can reveal new stories. Consider, for example, comparing different sectors of the economy or different groups in society, or linking changes in economic performance to population growth to reveal different aspects of economic growth. Geographic breakdowns at national, regional or local levels can also strengthen reporting on the devolved UK (for further discussion about comparisons see Selective Comparisons below).
Contextualising Statistics
Statistics can easily be overstated and exaggerated to make a story look dramatic. So it is important that we use statistics accurately, explaining any caveats and limitations where appropriate. We should report statistics in context to make them meaningful and ensure our audiences understand their significance, taking care to avoid giving figures more weight than can stand scrutiny. Beware of using statistics in headlines where figures are rarely meaningful without context.
The paragraphs below about averages; big numbers; outliers; rising or falling numbers; regression to the mean; percentages and percentage changes; correlation or causation; misleading graphs; selective comparisons and risk all discuss how to contextualise, explain and interpret statistics in more detail as well as the pitfalls to avoid when understanding and presenting them.
Averages
Averages are useful ways of summarising lots of numbers in one figure, but the 'average' is not necessarily the 'typical' value. Statistically, there are several types of average, the most common of which are: the mean (the sum of all the numbers divided by how many numbers there are) and the median (the middle point when the numbers are sorted into ascending order). Each average measures something different and using them interchangeably or in the wrong context can result in a misleading report.
The mean is a useful calculation to show the 'average' (or central value), provided the sample is representative and sufficiently large. But with skewed distributions such as salaries, where some wages may be much higher or lower than most, the mean may be disproportionately affected in a misleading way. For example, if the mean income of ten middle-managers in a pub is £50,000 and then Bill Gates, who earns £1 billion, walks in, the mean income suddenly shoots up to roughly £91 million. The mean income is now far higher than the actual earnings of everyone in the pub, other than Bill Gates. Using the mean here gives a false impression of the 'typical' wage.
Where a data-set contains a few extremely high or low values and is unrepresentative or insufficiently large, then the median may be a more accurate and appropriate way to represent the central value. The median income, for example, would be the one halfway through the list of incomes lined up from smallest to largest.
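A minimal Python sketch of the pub example (the incomes are illustrative round numbers, not real data) shows how a single outlier drags the mean away from the typical value while the median barely moves:

```python
from statistics import mean, median

# Illustrative figures only: ten middle-managers, each on roughly £50,000
incomes = [50_000] * 10
print(f"mean: £{mean(incomes):,.0f}  median: £{median(incomes):,.0f}")
# mean: £50,000  median: £50,000

# Bill Gates (an illustrative £1bn income) walks into the pub
incomes.append(1_000_000_000)
print(f"mean: £{mean(incomes):,.0f}  median: £{median(incomes):,.0f}")
# mean: £90,954,545  median: £50,000 - the median still reflects the 'typical' wage
```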
A change in an average does not necessarily mean a change for an individual. For example, if average wages rise, this does not mean that all people in the distribution get paid more; some people may not have seen an increase in their wage at all because a mean value does not reflect income distribution.
Take care when distinguishing which average has been used as you interpret results. Consider whether the correct average has been applied to the information you are trying to find out. Choose the appropriate average carefully and explain your choice and on what it is based, including any outliers where they occur. Avoid using the term average to mean 'ordinary' or 'normal', and avoid using average to mean 'most people' unless it does mean that.
Sometimes averages may not be the most revealing information to present to the audience. For example, if the 'average' wage has gone up by 2.3%, what does that mean for the wages at the top and bottom of the income distribution? (For further discussion about comparisons see Selective Comparisons below.)
Big and Small Numbers
Just because a number is very big or small does not make it substantial. Big and small numbers are difficult to understand without any context. Millions or billions are not part of our everyday experience, so it is not easy to judge if they are actually big or not. (See Being Clear About Significance below.)
To make sense of big numbers we should put them in context and divide by the number of items to which they relate or people they affect. For example [1], an annual figure measuring public spending is better expressed in human terms by dividing by the population. This will give you a more meaningful measure of what the figure represents per person per year. Or an increase of government spending on nurseries should be divided by the number of 3-4 year olds in the population.
We should avoid using the most extreme number, big or small, to make a story more dramatic, unless it is put into context. For example [2], you may think a government promise to spend £300m over five years to create a million new childcare places is a lot of money, equalling £300 per place. But when you work it out per year (divide by 5), that's £60 annually and only £1.15 per week (divide by 52).
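As a rough illustration, the childcare arithmetic above can be checked in a few lines of Python (the figures are those quoted in the example, not fresh data):

```python
# Figures as quoted in the example above: a £300m pledge over five years
# for one million new childcare places.
pledge = 300_000_000      # £
years = 5
places = 1_000_000

per_place = pledge / places       # £300 per place over the whole period
per_year = per_place / years      # £60 per place per year
per_week = per_year / 52          # about £1.15 per place per week

print(f"£{per_place:.0f} per place, £{per_year:.0f} a year, £{per_week:.2f} a week")
# £300 per place, £60 a year, £1.15 a week
```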
Outliers
Outliers, or the most extreme and unexpected numbers (large or small) that don't fit the mould in a data set, should be treated with an additional level of scrutiny.
Often outliers can be chance phenomena or due to experimental abnormality, data error or measuring mistake. As such, or if they are simply unlikely, they may not reveal anything unusual or scientifically significant at all and a story based on such an outlier may need to be rejected.
But not all outliers are mistakes and these unrepresentative numbers might mark something significant.
So where there are outliers, consider how likely it is that the outlier is actually true and if it is realistic, given existing evidence. Do the explanations for the possible causes of the outlier seem credible? If in doubt, ask the producer of the statistic.
Projections
Take care when interpreting projections, explaining any caveats or qualifications. Projections and forecasts are typically presented as a range of possibilities because we are uncertain of future events. We should give a balanced view of the possible ranges and focus on the most likely number, given what else is known, rather than the most extreme value. We should avoid headline phrases such as 'up to', 'as much as', 'could rise', 'could be as high as' or 'may reach' where the projections are based on the most extreme value, unless there is an editorial justification for an interest in the absolute maximum value.
Rising or Falling Numbers
We should avoid reporting rising or falling numbers without saying what they rose to or fell from.
We should recognise that numbers can go up as well as down and avoid attaching too much importance to chance results. A high number could also be part of a falling trend, so we need to take care when drawing conclusions from a peak, as it may not represent an upward curve. When a number reaches an unusual high, it's likely to fall to a more typical number next (unless, say, it represents the start of an epidemic). When exceptionally high or low values return to more typical values over time, statisticians call this 'regression to the mean'.
Regression to the Mean
Unusually high or low measurements in repeated data tend to be followed by measurements that are closer to the mean (see Averages above). This is because most values are closer to the mean than the extreme ones. Failure to appreciate this can lead to misleading interpretations and conclusions. Consideration should always be given to regression to the mean as a possible cause of an observed change. You should not draw conclusions about the likelihood of future events on the basis of one extreme result.
You should also be sceptical about interventions to deal with circumstances vastly different from the average, which appear successful due to regression to the mean. For example [3], the introduction of a speed camera following a spike in car crashes may appear to explain the reduction in accidents the following year. However, this fall back to the norm may have happened anyway, regardless of the presence of the speed cameras. Other factors should also be considered, such as chance or improvements in road layout and car safety.
In healthcare, regression to the mean can result in wrongly concluding that a result is due to a particular treatment, when it is actually due to chance. For example, the reduction in the incidence of illness following the introduction of a vaccination programme to counter an outbreak of a new disease, may be explained by regression to the mean, particularly where the programme started at the height of an outbreak.
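An illustrative simulation (invented numbers, not real crash or health data) shows the effect: if sites are selected because they had an unusually bad year, their next year tends to look better even when nothing at all has changed:

```python
import random

random.seed(1)
sites = 1000

# Crash counts at each site are drawn from the same chance process in both
# years - there is no intervention and no real change between year 1 and year 2.
year1 = [random.gauss(20, 5) for _ in range(sites)]
year2 = [random.gauss(20, 5) for _ in range(sites)]

# Pick the "blackspots": sites with an unusually bad year 1, as a camera
# programme might.
worst = [i for i in range(sites) if year1[i] > 30]

before = sum(year1[i] for i in worst) / len(worst)
after = sum(year2[i] for i in worst) / len(worst)
print(f"worst sites, year 1: {before:.1f} crashes on average")  # well above 20
print(f"same sites, year 2:  {after:.1f} crashes on average")   # back near 20
```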
Percentages and Percentage Changes
Percentages can be a helpful way to describe data in a meaningful way, providing they are used correctly and properly contextualised.
Do not confuse percentage differences with percentage points.
When you are subtracting one percentage from another, the term percentage point should be used. For example, an increase in interest rates from 10% to 12% is a rise of 2 percentage points. This is an absolute change.
When you want to discuss a relative change (i.e. an increase or decrease relative to your starting point, which is a fraction of the original value), you express this as a percentage. For example, the price of a product which has risen from £10 to £12 has increased by £2 or 20%.
Take care to avoid people thinking percentage increases are bigger than they actually are. For example, you could add clarity to the statement that interest rates went up from 10% to 12%, by reporting an interest rate rise of 2 percentage points, which means a 20% increase in interest payments.
Where possible, try to avoid expressing large increases or decreases in percentage terms; use doubling or trebling instead. In particular, avoid percentage changes of more than 100%, as audiences may not immediately understand that a 200% increase is a trebling in value.
Where there are changes in statistics you should include the context such as the start or end points. For example, a doubling in reported crime when there has only been one knife attack, means there were actually only two reported incidents, which is far less worrying.
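A short sketch, using hypothetical helper functions, of the distinction between an absolute change in percentage points and a relative percentage change:

```python
def percentage_point_change(old_pct, new_pct):
    """Absolute change between two percentages, in percentage points."""
    return new_pct - old_pct

def percentage_change(old_value, new_value):
    """Relative change, as a percentage of the starting value."""
    return (new_value - old_value) / old_value * 100

# Interest rates rising from 10% to 12%
print(percentage_point_change(10, 12))   # 2    -> a rise of 2 percentage points
print(percentage_change(10, 12))         # 20.0 -> a 20% relative increase

# A £10 product rising to £12
print(percentage_change(10, 12))         # 20.0 -> a 20% price rise
```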
Correlation or Causation?
Correlation can be coincidental and is not the same as causation. A positive correlation is when two sets of data move in the same direction at the same time. But just because there is a change in A does not mean it is the cause of a change in B.
Correlations can be found in lots of data and can be quite coincidental. For example [4], the number of films that Nicolas Cage appeared in correlated with the number of deaths that occurred in swimming pools during a ten-year period. This is chance, a spurious correlation, and it is highly unlikely that one caused the other.
Sometimes there are other reasons which explain the correlation. Shoppers in the UK tend to spend more money in shops when it is cold and less when it is hot. However, this may not mean that the cold weather causes people to shop. That Christmas and the sales coincide with winter is a far more likely explanation.
Attributing causality has a high threshold. As set out in guidance published by the Government Statistical Service, it requires demonstrating that:
- A is correlated with B
- A happened before B
- All other plausible causes of B have been ruled out.
Causality can often only be determined by rigorous scientific examination, such as a randomised controlled study, in which people are randomly put into two or more groups. Each group receives a different intervention, including a control intervention intended to represent no change, and each group is then assessed. Only then can the outcome for each group be attributed to the difference between the interventions. For example [5], US studies suggested that juveniles at risk of offending were unlikely to do so if they visited prisons and witnessed the harsh realities of life inside. The 'Scared Straight' programme claimed a 94% success rate. However, these studies only collected data for those who took part in the programme. It was not until randomised controlled trials looked at the offending behaviour of juveniles who did not access the programme that it became clear it was ineffective, and in some cases juveniles who took part were more likely to be involved in crime.
Take care when looking for explanations of correlated data. Avoid factoring in your own bias and preconceptions. Consider if causality was actually examined by the study. If there is no other evidence to support causality we should normally only report the existence of a correlation, or not report it at all, unless editorially justified.
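A brief sketch with invented figures shows how two unrelated series can still produce a correlation coefficient close to 1; the calculation says nothing about one causing the other:

```python
# Invented figures purely for illustration - not the real Nicolas Cage data.
films  = [2, 2, 3, 1, 1, 2, 3, 4, 1, 4]                 # films per year
deaths = [80, 85, 110, 70, 75, 90, 115, 130, 65, 120]   # pool deaths per year

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

print(f"{pearson(films, deaths):.2f}")   # close to 1, yet the link is coincidental
```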
Misleading Graphs
Take care when interpreting graphs and charts. They are helpful tools for visually displaying large amounts of data quickly, but can be used deliberately to mislead or shock by distorting the data. Examples, the first of which is sketched after the list below, include cases where:
- the vertical scale (y-axis) is too big or too small, or misses out numbers or goes up in uneven steps or does not start at zero;
- the graph is incorrectly labelled;
- data is deliberately left out to support an argument;
- sizes of symbols in a pictograph are not uniform;
- pie charts show similar sized pieces for different values or include values which do not add up to 100%;
- or selective start and end dates are chosen to represent a change over time.
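A small sketch (assuming matplotlib is available; the figures are invented) of how truncating the vertical axis can make a trivial difference look dramatic:

```python
import matplotlib.pyplot as plt   # assumes matplotlib is installed

values = [50.2, 50.9]             # two very similar figures (illustrative)
labels = ["Before", "After"]

fig, (honest, misleading) = plt.subplots(1, 2, figsize=(8, 3))

honest.bar(labels, values)
honest.set_ylim(0, 60)            # axis starts at zero: the difference looks modest
honest.set_title("Axis from zero")

misleading.bar(labels, values)
misleading.set_ylim(50, 51)       # truncated axis: the same gap looks dramatic
misleading.set_title("Truncated axis")

fig.tight_layout()
fig.savefig("axis_comparison.png")
```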
Selective Comparisons
Comparisons can help numbers which may be meaningless in isolation make more sense. For example, reporting that German GDP has increased by 0.3% is more meaningful if audiences are told which time periods are being compared, how large German GDP is, or how the change compares with other European countries.
A failure to make comparisons can also hide important context. For example, 584 unwanted pregnancies from one type of contraceptive sounds alarming, but is less significant when compared with the much higher failure rates of other contraceptives, which may make it the most effective form of contraception to use.
But, comparisons of any kind are often fraught with difficulties. To avoid bogus comparisons make sure the same groups are being compared over the same time period and that the activity being compared is also the same. Consider the comparison carefully before accepting it as evidence.
Beware that changes in measuring systems or recording standards can invalidate comparisons over time. For example [6], an apparent spike in violent crime in 2008/09 can be explained by changes introduced in 2002/03 to the way some offences were logged by police; it was not part of a rising trend when compared to violent crime in the late 90s. Any comparison of police recorded crime statistics over time without explaining this qualification is likely to mislead.
Take care with league tables, such as those for hospitals or schools. A single statistical measure is unlikely to be a valid basis for comparing one hospital or school with another. A teaching hospital may have a worse score, but only because sicker patients are referred to it. A school may perform better simply because its results reflect the socio-economic intake of its pupils.
Exercise additional caution with international comparisons where what is being counted may be measured in different ways.
Risk
The reporting of risk can have an impact on the public perception of that risk, particularly with health scares or crime stories. Misleading reports about health risks may cause individuals to alter their behaviour in ways that could affect their health, while a report that distorts the risk of being a victim of crime may increase people's fear unnecessarily.
We should report risks in context, taking care not to worry the audience unduly, especially about health or crime. Headlines which may alarm or worry unnecessarily should be avoided.
We should consider the emotional impact pictures and personal testimony can have on perceptions of risk when not supported by the balance of evidence. If a contributor鈥檚 view is contrary to majority opinion, the demands of due accuracy and due impartiality may require us to make this clear.
Increased or Decreased Risks
If a risk has increased or decreased, audiences need to know how risky it was in the first place, otherwise they won't know whether a change in risk actually matters. For example [7], a report suggesting a 20% increase in the risk of getting colon cancer from eating an extra ounce of red or processed meat a day sounds dramatic. But it omits vital information. It's not enough to know how the risk of getting colon cancer changes if we eat bacon every day (the relative risk); the audience also needs to know what the risk of getting colon cancer was originally (the absolute or baseline risk). If the likelihood of developing colon cancer at all is 5%, an additional risk of 20% of that baseline is only one percentage point, meaning that your lifetime (absolute) risk of getting colon cancer is now 6%. Knowing that may mean you choose not to give up eating bacon every day.
Where the absolute change in risk is small, despite a dramatic headline figure in a press release suggesting a larger relative risk, we should consider the editorial justification for reporting such a story. Where there is editorial justification for reporting changes in risk, it would be meaningless if our reports did not include the baseline risk. If the baseline information is not available, consider asking for it.
Expressing risk as a percentage should also be considered carefully as it may be too abstract. It is easier for audiences to understand what it might mean for a group of people. For instance [8], in the colon cancer example above, about 5 men in 100 are likely to get the disease during their life. If they all ate bacon every day, about 6 would. So only 1 extra man per 100 will get colon cancer if they eat bacon daily; consider asking how many extra people per 100 or per 1000 might be affected by the risk.
(It should be noted that 1,000 out of 10,000 sounds like a higher risk than 1 out of 10 and should be avoided. If comparing risks, the same denominator should be used. For example, 2 out of 100 compared with 10 out of 100, rather than 1 in 50 compared with 1 in 10.)
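A few lines of arithmetic, using the figures quoted in the colon cancer example above, show how a relative headline translates into absolute risk:

```python
# Figures as quoted in the colon cancer example above.
baseline_risk = 0.05          # about 5 in 100 people develop the disease anyway
relative_increase = 0.20      # the "20% increased risk" headline

absolute_risk = baseline_risk * (1 + relative_increase)
extra_per_100 = (absolute_risk - baseline_risk) * 100

print(f"baseline risk: {baseline_risk:.0%}")                # 5%
print(f"risk with daily bacon: {absolute_risk:.0%}")        # 6%
print(f"extra cases per 100 people: {extra_per_100:.0f}")   # about 1
```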
Checklist
Research carried out by BBC journalists Sue Inglish and Roger Harrabin with the King's Fund [9] indicated concern among scientific experts about the potential of media coverage to distort risk and create disproportionate fear. Using the following checklist can help ensure the context of statistics is clear and avoid distorting the risk.
- What exactly is the risk, how big is it, and who does it affect?
- Can the audience judge the significance of any statistics or other research? Is the reporting clear about how any risk has been measured - for example the size of any research sample, margin of error, the source of any figures and the sponsor of the research?
- If you are reporting a change in the level of risk, have you clearly stated the baseline figure i.e. what the risk was in the first place? (A 100% increase or doubling of a problem that affects one person in one million will still only affect two in a million.)
- When reporting relative increases or decreases in risk, have you also included the absolute change? (A 20% relative increase in risk for a particular group may only increase the absolute risk of getting a disease by a much smaller number.)
- Have you expressed the risk in human terms, rather than percentages? (5 in 100 people at risk of developing a disease is easier to understand than a 5% risk.)
- Is it more appropriate and measured to ask "How safe is this?", rather than "Is this totally safe?"
- If a contributor's view runs contrary to majority expert opinion, is that clear in our report, questions and casting of any discussion?
- We should consider the impact on public perceptions of risk if we feature emotional pictures and personal testimony.
- Is there an everyday comparison that may make the size of the reported risk easier to understand? (For example, "it's as risky as crossing the road once a day".)
- Would information about comparative risks help the audience to put the risk in context and make properly informed choices? Consider, for example, how causing undue worry about the safety of the railways could lead audiences to migrate to the roads, unaware that the safety risk there is many times greater.
- Can the audience be given sources of further information?
Statistics in Debate
Statistical arguments underlying controversial subjects can be complicated and difficult to understand. Statistics may be quoted correctly, but refer to different aspects of a debate, or they may be offered selectively by rival sides to support opposing arguments or to reach different conclusions, either deliberately, or by mistake. For example, determining whether spending on flood defences had gone up or down during the coalition government, depended on which years were being compared.
The presentation of rival statistics can often confuse audiences and it may be insufficient to let them work out who is right or wrong. Sometimes, providing context about the veracity of those figures or methodology behind them may also not be enough when explaining rival statistics. We may need to weigh, interpret and evaluate statistical claims to help audiences navigate the arguments and consider alternative interpretations. We should aim to illuminate the debate and provide audiences with the information they need to understand complex statistical discussions.
So, we should avoid contributors presenting competing statistical claims without any analysis or interpretation about the veracity of those claims. This can be achieved in a number of ways, including intervention from presenters; two-ways with correspondents after interviews or signposting to further analysis such as the BBC's Reality Check service online or correspondent blogs.
Where statistics are misused or wrong we should challenge and correct them, particularly where they are central to an argument over a controversial issue. Statistical claims made by charities are often used to support a campaign and should be subject to the same degree of scrutiny and scepticism as those made by pressure groups or politicians.
Presenters and programme-makers should be properly briefed about statistical information before they conduct interviews. This should include briefings about statistical information available from independent sources which may challenge a contributor's argument.
The UK Statistics Authority has the statutory role to safeguard and promote the production and publication of Official Statistics. It should be noted that where the Authority publishes correspondence, it is providing an independent assessment of statistics used in the public domain. We should be alert to correspondence where the Chair is particularly critical.
(See Editorial Guidelines Section 4 Impartiality)
Being clear about Significance
Statistical Significance – How sure are we?
When assessing data that suggests something has an effect we have to decide if the observed differences are 'statistically significant', which means they are unlikely to have occurred by chance alone. For example [10], statistical significance can help us understand if the difference between a drug and a placebo is a real clinical effect or not. If the finding is statistically significant we can be more confident that the difference can be explained by something other than chance.
Confidence Intervals / Margin of Error
Statisticians may express significance using 'confidence intervals' or 'margins of error'. These tell you how well the sample results from an experiment, a survey or an opinion poll should represent what is actually happening.
For example, an opinion poll may try to predict the result of a general election based on a sample of the voting population. Pollsters will carry out statistical calculations to try to ensure their findings genuinely represent voters' intentions. One cannot say that any opinion poll is "right", because they are all predictions, so they only suggest an outcome. Pollsters work out how close to the "right" figure their results should be by calculating a 'confidence interval', better known as a 'margin of error'. For a typical 1,000-person poll, the margin of error is plus or minus 3 percentage points, so if the headline figure for a party's support is 32%, the poll is providing evidence that suggests support is between 29% and 35%. Nineteen times out of 20 a poll will be accurate to within 3 points; in the remaining 1 in 20, the true answer will lie outside the margin of error (though the poll cannot tell you which one it is).
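As a rough sketch, the plus or minus 3 point figure for a 1,000-person poll comes from the standard margin-of-error approximation for a simple random sample (real polls also apply weighting and design effects, so treat this as indicative only):

```python
from math import sqrt

def margin_of_error(share, sample_size, z=1.96):
    """Approximate 95% margin of error for a simple random sample."""
    return z * sqrt(share * (1 - share) / sample_size)

share, n = 0.32, 1000          # headline support of 32% in a 1,000-person poll
moe = margin_of_error(share, n)

print(f"margin of error: plus or minus {moe:.1%}")                  # roughly 3 points
print(f"plausible range: {share - moe:.0%} to {share + moe:.0%}")   # about 29% to 35%
```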
Usually, the smaller the sample, the larger the margin of error and the less likely the result represents the whole group robustly. Results which fall well within the margin may not indicate anything at all. For example [11], we cannot be confident unemployment has actually fallen over a three-month period when the level of the fall, 79,000, is within the margin of error of plus or minus 81,000. Conversely, if a change lies outside its margin of error, this is essentially the same as 'statistical significance'. A statistically insignificant figure is practically meaningless.
We must report the margin of error in graphics if the result falls within the margin to enable audiences to judge the significance of a poll or survey.
For more discussion about surveys, opinion polls, questionnaires, votes and straw polls see:
(See Editorial Guidelines Section 10: Politics, Public Policy and Polls: Opinion Polls, Surveys and Votes and Guidance: Opinion Polls, Surveys, Questionnaires, Votes and Straw Polls)
Practical Significance
However, even if something is statistically significant, that doesn't mean it is important to society. Consideration should also be given to whether the statistics are practically significant to our audiences. For example, do the short-term changes in unemployment figures tell us about how the labour market has changed, or do we need to look at the longer term trends?
We should give a balanced view, highlighting any caveats or doubts about significance, taking care not to overstate statistical significance. For example, a fall in the monthly rate of CPI inflation from 0% to minus 0.1% should not be reported as 'Britain plunged back into deflation'.
However, it is just as important to be clear when there is no change, in say, unemployment, inflation or GDP growth.
Transparency
It will usually be appropriate to report the source of figures to enable people to judge their importance. Where the story is about the statistic, being transparent about its source is vital. However, simply attributing the source of the statistic may be insufficient if the figure is incorrect. So care needs to be taken in assessing its validity.
Audiences may also need to understand how the statistic was originated to assess its importance. This may include understanding study-design; the sample size; representativeness; margins of error; how the data was collected; geographical relevance and time periods.
Where an organisation's research is into a topic that has not previously been investigated, consider explaining the methodology or providing links to it. Links to independent analysis should also be considered, as well as links to the BBC's Reality Check service.
Corrections
For Editorial Guidelines about correcting mistakes please see Section 3 Accuracy, Correcting Mistakes.
(See Editorial Guidelines Section 3 Accuracy 3.3.28)
Corrections to reports on the News website should follow the News corrections policy. Any of our content may form the basis of material produced in other areas of the BBC. It is therefore important to communicate significant corrections made retrospectively to our stories, particularly if they are the result of a formal complaint.
There is further guidance about publishing online corrections in our editorial guidance note on the Removal of BBC Online Content. (See Guidance: Removal of BBC Online Content, Alternatives to Removal, Publishing Corrections.)
Making Sense of Statistics – 10 golden rules
Look on statistics as your friends, providing you with facts and evidence on which to base your stories. But treat them with caution and respect.
- Let the statistics drive the story and not the other way round. Taking a theory and trying to find statistics that fit it is a recipe for disaster, and one of the biggest causes of inaccuracy and misrepresentation. Make sure that whoever has provided the figures hasn't fallen into that trap.
- Too good to be true? If a story looks wrong, it probably is wrong. Don't take things at face value, especially if you are looking not at the raw figures, but at how someone else has interpreted them or written them up.
- Context. Look at the background, what is being measured and over what period of time. Could the chosen start and end date have an effect on the findings? Remember that many important social and other changes happen over long periods.
- Check your source. Is it likely to have a vested interest in interpreting the findings in a particular way?
- Look at the methodology. All responsible producers of statistics will tell you how they have been produced, the size of the sample and the margins of error. Beware of people seeking publicity using poor surveys, self-selecting samples or partial selection from someone else's data.
- Compare like with like - both over time and between different sources. Just because two sets of statistics look alike, it doesn't always mean you can compare them; methods and samples can differ. Comparisons between different countries are especially difficult.
- Correlation and causation. Just because two facts are sitting alongside each other and look as though they might be connected, do not assume that they are. There may be no connection between them at all, causal or otherwise.
- Big numbers and little numbers. Seen in context, each can look very different. A risk going from 0.01 to 0.02 might be a 'doubling' but it's still a very small risk. A billion pounds of health spending might sound like a lot, but looks less so if it's expressed as less than 1% of the total budget. Make sure you look at both the percentage and the raw numbers.
- Don't exaggerate. To say the cost of something 'could be as high as' a large sum might be strictly true but could be misleading if it's a worst case scenario. The central estimate is the most likely to be accurate.
- Averages. The 'mean' is all the figures added together and divided by the number of figures. It is the most commonly used. The 'median' is the middle figure within a range. It often gives a fairer picture. Understand the difference and be clear which you are using.
Never be afraid to ask advice from a statistician about how to understand statistics.
With thanks to: Office for National Statistics, More or Less, Anthony Reuben.
[1] Making Sense of Statistics, Michael Blastland, p10
[2] The Tiger That Isn't: Seeing Through a World of Numbers (Profile Books), Michael Blastland & Andrew Dilnot, pp18-19
[3] The Tiger That Isn't: Seeing Through a World of Numbers (Profile Books), Michael Blastland & Andrew Dilnot, pp59-65
[5] Statistics for policy professionals, Good Practice Team, Government Statistical Service, Jan 2017, pp14-15
[6] Statistics for policy professionals, Good Practice Team, Government Statistical Service, Jan 2017, p13
[7] Making Sense of Statistics, Michael Blastland, p13
[8] The Tiger That Isn't: Seeing Through a World of Numbers (Profile Books), Michael Blastland & Andrew Dilnot, pp108-110
Last updated July 2019