As development practitioners and researchers, we need to think carefully about the types of Monitoring, Learning and Evaluation we use with the people and communities we support. The measures we choose can cause conflict and distress – especially for underrepresented community members. They can also undermine trust and engagement, and produce unreliable data.
A few years ago, I was talking with some researchers about measuring the outcomes of including underrepresented communities. We were discussing different standardised methodologies and typical Monitoring, Learning and Evaluation indicators for targeted groups of underrepresented people (women, people with disabilities, and people with diverse sexual orientations, gender identities, expressions and sex characteristics).
I had been feeling increasingly uneasy about promoting a strengths-based practice while constantly focusing on programs and what was wrong in relation to inclusion and participation.
The people I was talking to considered me idealistic and felt that the research, monitoring, learning and evaluation were much more important than adopting a strengths-based approach.
As someone who strongly advocates for community participation at every step of development programmes, I decided to ask for feedback from stakeholders in a project I was leading.
To help measure impact, we decided to use a survey with stakeholders in two communities we were working in, exploring how inclusive their local communities were – focusing on the inclusion of women and people with disabilities within the scope of the project.
The survey had 20 statements (rated on a scale from strongly disagree to strongly agree). Some were positive, e.g., “If we want people with disabilities to respect us, we must treat them with respect”, while others were quite negative, e.g., “We don’t employ people with disabilities because we don’t trust they have the capacity to participate at this level”.
The group reacted really badly to the negative focus of many of the questions. All the participants faced major challenges within the development system, within the specific thematic area we were working in (WASH – water, sanitation and hygiene), and in their lives generally, and they felt that by using the survey we were judging them. They were also worried about how the information would be used (e.g., would it be passed on to our donors?) and said it reminded some of them of previous negative experiences with aid workers.
Fortunately, before asking them to complete the survey, we had explained that we wanted their feedback about it and had taken steps to ensure confidentiality. After they had placed their surveys in an envelope (so we didn’t see them), we asked what they thought about the survey. Once it became clear that they had reacted quite negatively, and after we had discussed it for a while, we invited them to take their surveys back and destroy them.
If we had not returned the surveys and not set it up as carefully as we had, the trust we had built would have been undermined and it would have been harder to engage them. As it was, the experience helped build trust and engagement because, by returning the surveys without having seen their responses, we clearly demonstrated that we had listened to them, trusted their judgement, and valued their insights.
Such surveys can also produce quite unreliable data. Some years back a community practitioner told me about a researcher who had given a group of women she was working with an anonymous survey to complete. The practitioner thought some of the questions were quite personal and intrusive so after the researcher left, she asked the women whether they were worried about answering the questions. They replied, “No, we just lied.”
The measures we use need to be consistent with our approach (if we are strengths-based we need to find or develop strengths-based measures), be respectful and be appropriate to our audience. It is not OK for us to think that research and evaluation are more important than the people we work with.