
The expectations and excitement levels among our research and learning team here at Βι¶ΉΤΌΕΔ Media Action were high. Midline results from the trial were published online last month. Had the company behind the trial, Development Media International (DMI), cracked the Holy Grail and isolated the impact of media on behaviour change? Could these results from their trial in Burkina Faso begin to answer the questions of attribution which trouble the impact evaluation of health communications and resonate through the halls of our donors? Our team couldn’t wait to jump into the detail of the data and explore the initial results.

But while DMI’s conclusion from the findings is exciting – that this is “the first randomised controlled trial to demonstrate that mass media can cause behaviour change” – their three-page report and one-page summary left us wanting more.

While the findings are neatly summarised for a policy-level audience, we couldn’t help but ask: where are the technical appendices, where is the data, where are the standard errors and, ultimately, what does all this mean for us as a sector? This is why we contacted DMI and their research partners at the London School of Hygiene and Tropical Medicine. We were pleased to learn that a more detailed technical report is being finalised and will be published in the coming months.

A rare condition for a unique study

In writing about the study, DMI state that there are few countries and media environments around the world that would provide the necessary conditions for this type of study. A principal issue is ‘contamination’, where some people who are not supposed to hear the broadcast (the ‘control group’) are, in fact, exposed to it.

The limited number of suitable countries to work in is in keeping with findings on impact evaluation approaches in health communication from scholars in the sector, such as Jane Bertrand and Robert Hornik. They have underscored the need for alternatives to randomised trials as the optimal means of evaluating full-coverage mass-media programming: it is generally not viable to assign subjects randomly to treatment groups when the intervention consists of a full-coverage campaign aiming to reach the largest possible audience.

This highlights the uniqueness of their Burkina Faso study and why the results – positive or negative – have such potentially large implications for our sector’s evidence base.

Up until now, the media for development sector has relied on less robust evaluation methods to explore how mass media contributes to improved knowledge and behaviour. And while all evaluations, qualitative and quantitative, build towards a more informed answer, the Burkina Faso trial is pushing the envelope by applying a research methodology untried in our sector – one that should give us more conclusive results.

But to be able to learn the most from DMI’s trial in Burkina Faso and interpret its results correctly, it is vital we get more information on the following aspects, which we hope will be addressed in the upcoming publication.

Theory of change

An important area where we would like to gain more insight is the Theory of Change that underlies DMI’s intervention in Burkina Faso.

How are their short, high-intensity broadcasts expected to change behaviour and, more importantly, lower child mortality rates?

From the list of outcomes targeted, it appears that the trial focused on curative, one-off behaviours and less on those that are underpinned by social norms. People will have more incentive to alter their behaviour if their child is sick, but will be less inclined to change if they feel their family or community would disapprove.

It would also be interesting to learn more about the quality and nature of the programmes: how similar or different are they, what are the editorial values, has any assessment of quality been done? Knowing the Theory of Change and relevant programme information would help us to look beyond the results and understand not only if we see impact, but why.

Dose response

Similar questions apply to the presented dose response results. ‘Dose response’ refers to the length of time each message was broadcast and the possible relationship this has with behaviour outcomes – ie do behaviours that were on air for more weeks show more change?

Only a selection of outcomes – and a diverse set at that – is used to present the effect of dose response. This affects the interpretability of the results.

Another way of presenting the dose response would have been to group similar behaviours together. It is safe to say that it is easier for people to give Oral Rehydration Solution (ORS) for diarrhoea than to install a latrine in their house; we would expect to see larger differences in one outcome than the other.

Breaking the dose response down according to type of behaviour could have resolved that and provided more insight. Though these midline results give an indication, based on what is currently presented it is difficult to say what the real influence of dose response is.
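To make our suggestion concrete, here is a purely illustrative sketch – not DMI’s analysis, and with invented numbers, behaviours and column names – of how grouping similar behaviours before looking at dose response might work:

```python
# A minimal sketch (not DMI's analysis) of grouping behaviours by type before
# examining dose response. All numbers and names below are hypothetical.
import pandas as pd
import numpy as np

# Hypothetical midline summary: one row per behaviour
df = pd.DataFrame({
    "behaviour":      ["ORS for diarrhoea", "care-seeking", "latrine built", "exclusive breastfeeding"],
    "behaviour_type": ["one-off", "one-off", "norm-based", "norm-based"],
    "weeks_on_air":   [20, 35, 10, 30],
    "change_pp":      [8.0, 11.0, 1.0, 2.0],   # percentage-point change vs control
})

# Estimate a simple dose-response slope (change per extra week on air)
# separately for each group of similar behaviours.
for behaviour_type, group in df.groupby("behaviour_type"):
    slope, intercept = np.polyfit(group["weeks_on_air"], group["change_pp"], deg=1)
    print(f"{behaviour_type}: {slope:.2f} percentage points per additional week on air")
```

Read this way, a flat slope for norm-based behaviours alongside a steeper one for one-off behaviours would tell a clearer story than a single mixed list of outcomes.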

Research design

Without further technical insight or Theory of Change to turn to, the midline report leaves us with some questions about the study design.

These pertain, for instance, to technical issues like confidence intervals and the powering of the samples. Perhaps more pressing, though, is the question of how comparable the control and intervention zones are on socio-economic, demographic, cultural and geographical factors. Based on the presented data, baseline equivalence is questionable.

Reported differences in behaviour outcomes could therefore be caused not by exposure but by important underlying characteristics of the selected areas. Randomisation should balance out such differences, but when a relatively small number of areas is selected, this is unlikely to happen. Adjusting only for distance to a health centre, ie keeping its effect constant, is then insufficient to assess the actual impact of the intervention.

A Theory of Change could provide an important rationale for determining which characteristics to control for when analysing the data. So we look forward to seeing the endline results where adjustment for possible confounders is said to take place.
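To illustrate what adjusting for more than one baseline characteristic could look like – a hypothetical sketch rather than a description of the trial’s actual analysis, with made-up variable names and values – a cluster-level adjustment might take a form such as:

```python
# A hypothetical sketch (not the trial's actual analysis) of adjusting for
# several baseline characteristics when comparing intervention and control
# clusters. Variable names and values are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

# Toy cluster-level data: one row per survey cluster
clusters = pd.DataFrame({
    "careseeking_midline":  [0.62, 0.55, 0.71, 0.48, 0.52, 0.44, 0.58, 0.41],
    "intervention":         [1, 1, 1, 1, 0, 0, 0, 0],
    "distance_to_clinic":   [3.2, 5.1, 2.4, 7.8, 4.0, 6.5, 3.1, 8.2],   # km
    "baseline_careseeking": [0.50, 0.47, 0.60, 0.39, 0.49, 0.41, 0.55, 0.38],
    "wealth_index":         [0.1, -0.3, 0.4, -0.6, 0.0, -0.4, 0.2, -0.5],
})

# Adjust for distance to a health centre plus other plausible confounders
# measured at baseline, rather than for distance alone.
model = smf.ols(
    "careseeking_midline ~ intervention + distance_to_clinic"
    " + baseline_careseeking + wealth_index",
    data=clusters,
).fit()

print(model.params["intervention"])          # adjusted difference between arms
print(model.conf_int().loc["intervention"])  # and its confidence interval
```

A Theory of Change would help determine which of these (or other) characteristics genuinely belong in such an adjustment.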

Statistical detail

From a research perspective, assessing the impact of an intervention becomes complicated when a succinct summary leaves out certain statistical information. Sample sizes for outcomes measured at the cluster level, and the accompanying standard errors, are important for external researchers to be able to interpret results correctly.

A more technical report should include this information for the sake of transparency about the study. It would also help us understand why strong relationships between the intervention and outcomes are reported while, in many cases, p-values (probability values) are not significant. Could differences be the result of chance, or is the study design making it difficult to detect significant change?
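One reason cluster-level sample sizes matter so much is the ‘design effect’: respondents within the same cluster tend to resemble each other, which shrinks the effective sample size and widens standard errors. A back-of-the-envelope illustration, with entirely hypothetical numbers, shows how quickly this can erode statistical power:

```python
# An illustrative calculation (with made-up numbers) of why cluster-level
# sample sizes matter. Clustering inflates variance by the design effect,
# shrinking the effective sample size - one possible reason large apparent
# differences may not reach statistical significance.
n_clusters = 14            # hypothetical number of survey clusters per arm
people_per_cluster = 350   # hypothetical respondents per cluster
icc = 0.05                 # hypothetical intra-cluster correlation

n_total = n_clusters * people_per_cluster
design_effect = 1 + (people_per_cluster - 1) * icc
effective_n = n_total / design_effect

print(f"Nominal sample size per arm:   {n_total}")
print(f"Design effect:                 {design_effect:.1f}")
print(f"Effective sample size per arm: {effective_n:.0f}")
```

With numbers of this order, a nominal sample of several thousand respondents behaves more like a few hundred – which is why the cluster-level detail is so important for interpreting the reported p-values.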

Interpretation and going forward

So far, the results seem to be mixed. For certain one-off behaviours, such as seeking treatment for diarrhoea at a clinic, there appears to be an impact, but for many others, and especially those like exclusive breastfeeding which are underpinned by social norms, the intervention does not appear to have had an effect. Though again, this is interpretation without technical information or qualitative data to inform us further.

The fact that these are midline results may also help explain why they are mixed. It takes time to change people’s attitudes, and perhaps three years is just too short. Endline results may be more conclusive. We would encourage future initiatives to evaluate interventions beyond their running time: after broadcasts have finished, do people fall back into old behaviours, or has the change been sustained? Is there any enduring impact we can bring about with mass communications?

A final note of caution is that it is important to realise this RCT is just one study, conducted under difficult, imperfect conditions. Even if we were able to conclusively interpret the current results, one swallow does not make a summer. We need to contextualise the results in the broader field of what we know media can and cannot do. Some smaller-scale links that we at Βι¶ΉΤΌΕΔ Media Action are trying to establish are explored in a few of our recent research papers.

This is an exciting moment for our sector. Our appetites have been whetted. We look forward to learning and understanding more from DMI and the London School of Hygiene and Tropical Medicine on the findings from the trial – and would love to be part of the conversation as the findings move from midline to endline over the next one and a half years.

It is a great opportunity for us all to learn from the findings of this unique trial.

 

For more information on the DMI Burkina Faso trial, contact Will Snell, Director of Development, Development Media International, +44 (0)20 3058 1631,  will.snell@developmentmedia.net
