3. From M&E to monitoring and learning

3.2 How to monitor – collecting and managing data

Part 2 looks at the different tools and approaches for collecting and managing the information needed. This is broken down into two types of method:

  1. Methods to be used in real time for managers and practitioners to collect data throughout the process: these generally relate to output, uptake and more immediate outcome measures, as they tend to be more tangible and observable.
  2. Methods oriented towards intermediate and longer-term outcome measures: these require more time and are generally used retrospectively.

Real-time data collection methods

Generally, if the intervention is very brief and engagement with individuals is very limited (e.g. through the broadcast media), the data you can collect will be thin and may need to be supplemented with data from discrete studies. The deeper the engagement, the richer the information you can collect in real time – and the more important these methods become. Here are some of the methods.

Journals and logs

One of the most basic ways of capturing information is by keeping a journal of observations, trends, quotes, reflections and other information. Logs are usually quantitative and simple – number of people attending an event or airtime during a radio show. Journals are more descriptive, and either structured with a specific format and fields to be filled in (such as progress against predefined measures or changes in contextual factors) or unstructured, allowing the author to record comments. They can be notebooks carried by team members or electronic (website, database, intranet, email or even mobile apps).
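
To make a structured journal easy to analyse later, it helps to fix the fields up front. The sketch below (in Python) is a minimal illustration of such a structure; the field names are hypothetical and not taken from any particular organisation's template.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class JournalEntry:
    """One structured journal entry; field names are illustrative only."""
    author: str
    entry_date: date
    activity: str                    # which output or event this relates to
    observation: str                 # what was seen, heard or quoted
    measure: str = ""                # predefined measure this speaks to, if any
    context_change: str = ""         # any shift in the wider context
    tags: List[str] = field(default_factory=list)

# Example: recording a piece of audience feedback after a workshop
entry = JournalEntry(
    author="J. Bloggs",
    entry_date=date(2024, 3, 14),
    activity="Policy briefing launch",
    observation="Ministry adviser requested the full dataset after the session.",
    measure="uptake",
    tags=["feedback", "uptake"],
)
print(entry)
```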

Examples include ODI’s ‘M&E log’, which all staff members can contribute to by sending an email to a particular inbox, which then stores the information on the institute’s intranet. The unstructured approach makes it very easy for staff to submit evidence of uptake of research outputs and feedback from audiences but does require effort to maintain, systematise and use.

The Accountability in Tanzania programme collects journals from its 20-plus partners, each reporting on the outcomes of up to 8 different actors, to understand their influence at national and local level in Tanzania. It asks for journals to be submitted only twice a year and has developed a database to organise the information, enable analysis and identify patterns.

After action review

The US Army developed after action reviews as a technique for debriefing on a tactical manoeuvre. They have been adapted to organisational use and are commonly applied as part of a learning system. An after action review is typically used after an activity has taken place, bringing together the team to reflect on three simple questions: what was supposed to happen, what actually happened and why were there differences? They are designed to be quick and light – not requiring a facilitator, an agenda or too much time – and to collect any information that might otherwise be forgotten and lost once the event passes. They should therefore be included as part of the activity itself and scheduled in right at the end. As with a journal, notes from the meeting should be filed away and brought out at the next reflection meeting.
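
Because the review always turns on the same three questions, the notes can be captured in a fixed format and appended to the same store used for journals and logs. A minimal sketch, with hypothetical field and file names:

```python
import csv
from datetime import date
from pathlib import Path

AAR_FILE = Path("after_action_reviews.csv")  # hypothetical location for filed notes

def record_aar(activity: str, planned: str, actual: str, differences: str) -> None:
    """Append the answers to the three AAR questions to a running CSV log."""
    is_new = not AAR_FILE.exists()
    with AAR_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["date", "activity", "what_was_planned",
                             "what_happened", "why_different"])
        writer.writerow([date.today().isoformat(), activity, planned, actual, differences])

# Example: filed right at the end of the event itself
record_aar(
    activity="Parliamentary committee briefing",
    planned="Present three key findings to the committee clerk",
    actual="Only two findings presented; clerk asked for a written summary",
    differences="Session overran; verbal format less effective than expected",
)
```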

A variation on the after action review is the ‘intense period debrief’, developed by the Innovation Network in the US as a method for advocacy evaluations. The richest moments for data collection in any policy-influencing intervention are likely to be the busiest – such as when mobilising inputs into a parliamentary committee hearing or responding to media attention. Data collection methods should adapt to this. The intense period debrief unpacks exactly what happened in that busy time, who was involved, what strategies were employed, how the intervention adapted and what the outcomes were, without interrupting the momentum of the intervention.

Surveys

Surveys can be useful for obtaining stakeholder feedback, particularly when interventions have limited engagement with audiences. They are most appropriate for collecting data on uptake measures, since this is about reactions to and uses of intervention outputs. Surveys can also be used for outcome measures, but timing has to be considered, since outcomes take time to emerge. If a survey template is set up prior to an intervention, it can be relatively quick and easy to roll out after each event or engagement. This could be automated with an online service like SurveyMonkey – you just provide the link to your audiences.
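
If the same feedback questions are asked after every event, the template only needs to be defined once and responses can be tallied the same way each time. The sketch below is an illustration only: the questions, answer options and tallying logic are assumptions, and in practice the template would simply be loaded into whichever survey service you use.

```python
from collections import Counter

# Hypothetical post-event feedback template keyed to uptake measures
FEEDBACK_TEMPLATE = [
    {"id": "relevance", "question": "How relevant was this event to your work?",
     "options": ["not at all", "somewhat", "very"]},
    {"id": "use", "question": "Do you expect to use the material presented?",
     "options": ["no", "maybe", "yes"]},
]

def tally(responses: list) -> dict:
    """Count answers per question so uptake can be compared across events."""
    counts = {q["id"]: Counter() for q in FEEDBACK_TEMPLATE}
    for response in responses:
        for question_id, answer in response.items():
            if question_id in counts:
                counts[question_id][answer] += 1
    return counts

# Example: two illustrative responses collected after an event
print(tally([{"relevance": "very", "use": "maybe"},
             {"relevance": "very", "use": "yes"}]))
```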

Web analytics

Since more and more interventions used in influencing policy are web-based, it is important to have a strategy for collecting information about use of web services: what is being seen, shared and downloaded, when and by whom. Website analytics are generally easy to set up, with services like Google Analytics providing free data collection and management.

Nick Scott at ODI offers good advice on tracking a range of statistics, including webpages, publication downloads, search engine positioning, RSS feeds, Twitter, Facebook, mailing lists, blogs and media mentions. For each of these there are specific online tools recommended for collecting data. Once set up, these data services will run in the background and data can be collected and analysed when needed. Nick also describes the use of dashboards for compiling and visualising the data from multiple sources for analysis and use in decision-making.
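
Compiling a dashboard from several such services is largely a matter of aggregation. The sketch below assumes hypothetical CSV exports, one per channel, each with a ‘metric’ and ‘value’ column; it illustrates the idea rather than the workings of any particular tool.

```python
import csv
from pathlib import Path

# Hypothetical monthly exports from the separate tracking services
EXPORTS = ["web_analytics.csv", "downloads.csv", "twitter.csv", "mailing_list.csv"]

def compile_dashboard(export_dir: str = "exports") -> dict:
    """Combine per-channel exports into a single metric:value summary."""
    summary = {}
    for name in EXPORTS:
        path = Path(export_dir) / name
        if not path.exists():        # skip channels with no export this period
            continue
        with path.open(newline="") as f:
            for row in csv.DictReader(f):
                summary[f"{path.stem}:{row['metric']}"] = float(row["value"])
    return summary

# Example: print the combined figures for a quick monthly review
for metric, value in sorted(compile_dashboard().items()):
    print(f"{metric:40s} {value:>10.0f}")
```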

Web analytics need to be used modestly and cautiously, however. They will never be able to replace the other data collection methods mentioned above; for example, they will never tell you exactly who is reading your work, who they work for, what their job is and what, if anything, they will do with it after they have read it.

Retrospective study methods

The real-time methods are unlikely to provide much data or insight at the outcome level. For this, you will need either to set aside time and undertake your own retrospective study or to commission a specialist to investigate for you. Either way, the following methods and approaches are useful to consider, as they are oriented towards the kinds of outcomes discussed in Chapter 2 and set out in Table 2.

Stories of change

A story of change is a case study method that investigates the contribution of an intervention to specific outcomes. It does not report on activities and outputs but rather on the mechanisms and pathways by which the intervention was able to influence a particular change, such as a change in government policy, the establishment of a new programme or the enactment of new legislation. The change described can be an expected change that the intervention was targeting or an unexpected change – which itself can be positive or negative with respect to the original objective. Stories can also describe how an intervention has failed to influence an expected change, in which case they analyse the possible reasons why.

There are three major steps to writing a story of change:

Episode studies

Another case study method is the episode study, which looks at the different mechanisms leading to a change. These are more systematic assessments of how much each factor has contributed to the change, but they are very labour- and evidence-intensive. The steps are the same as for stories of change except that the evidence-gathering stage investigates any and all factors influencing the change, including but not limited to the intervention. This generally requires access to those close to the decision-making around the change in question. The advantage of this approach is that it can highlight the relative contribution of the intervention to the change in relation to other influencing factors and actors.

Bellwether interviews

The bellwether method was developed by the Harvard Family Research Project to determine where a policy issue or proposed change is positioned on the policy agenda, the perceptions of key actors and the level of traction it has among decision-makers. It involves interviewing influential people, or ‘bellwethers’, including elected representatives, public officials, the media, funders, researchers/think-tanks, the business community, civil society or advocates.

The method is similar to other structured interview techniques but with two important differences. First, at least half of the sample should have no special or direct link to the policy issue at hand. This increases the likelihood that any knowledge they have of the issue is due to the intervention rather than to personal involvement. Second, bellwethers should be informed of the general purpose and topic of the interview but not given specific details until the interview itself. This helps ensure their responses are authentic and unprompted. The interview should start with general questions and gradually become more specific.

Box 12: Sample bellwether questions (from Coffman and Reid, 2007)

  1. Currently, what three issues do you think are at the top of the [state/federal/local] policy agenda?
  2. How familiar are you with [the policy of interest]?
  3. What individuals, constituencies or groups do you see as the main advocates for [the policy]? Who do you see as the main opponents?
  4. Considering the current educational, social and political context, do you think [the policy] should be adopted now or in the near future?
  5. Looking ahead, how likely do you think it is that [the policy] will be adopted in the next five years?
  6. If [the policy] is adopted, what issues do you think the state needs to be most concerned about related to its implementation?

System or relational mapping

When the outcomes desired are related to how a system operates – for example building relationships between actors, shifting power dynamics, targeting the environment around which a policy is developed or improving information access or flows – then it can be useful to map that system to see how the different parts fit together. The data required for this are relational (i.e. to do with relationships, connections and interactions) rather than attribute data (i.e. to do with facts, opinions, behaviours and attitudes). They are usually collected through standard techniques such as surveys, interviews and secondary sources. By asking about the existence and nature of relationships between actors, a very different picture emerges of what the system looks like. This can be easily turned into a visual map to help identify patterns and new opportunities for influencing.

One particular method is NetMap, an interactive approach that allows interviewees to use physical objects and coloured pens to describe relationships between actors and their relative influence on a particular issue. It can be a useful variation if the aim is to gain perspectives across a system or network.

Another variation is influence mapping, which asks specifically about the influence one actor has on the opinions and actions of another. An influence map can show the primary and secondary (and if needed tertiary) influences on a key decision-maker. This can help in planning or adapting influencing strategies or identifying possible individuals to consult for a bellwether interview.
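
Relational data of this kind lends itself to standard network analysis tools. The sketch below uses the networkx Python library with an invented set of actors and ‘influences’ relationships to pick out the primary and secondary influences on a decision-maker; the actors and relationships are purely illustrative.

```python
import networkx as nx

# Directed graph: an edge A -> B means 'A influences B' (actors are illustrative)
influence = nx.DiGraph()
influence.add_edges_from([
    ("Research institute", "Ministry adviser"),
    ("Donor group", "Ministry adviser"),
    ("Ministry adviser", "Minister"),
    ("Business association", "Minister"),
    ("Media outlet", "Business association"),
])

decision_maker = "Minister"

# Primary influences: actors with a direct edge into the decision-maker
primary = set(influence.predecessors(decision_maker))

# Secondary influences: actors who influence the primary influencers
secondary = {
    actor
    for target in primary
    for actor in influence.predecessors(target)
} - primary - {decision_maker}

print("Primary influences:", sorted(primary))
print("Secondary influences:", sorted(secondary))

# A simple measure of each actor's overall position in the map
print("Centrality:", nx.degree_centrality(influence))
```

The same graph can be drawn as a visual map, or extended with weights on the edges to reflect the relative strength of influence reported in interviews.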