
Backed by a team of experienced professionals, our strategic services meet the needs of clients of all types and sizes - from small startups to large development actors - and deliver lasting change with measurable growth. Please get in touch with us today to learn how PEA Consultancy can partner and collaborate with you.


The Evaluation Contexts.

Development contexts continue to respond to transitory needs with various layers of integration. Acute disparities in gender and settlement status call for focused programming founded on accurate data and evidence. Because implementing partners in these contexts often lack robust, localized Evaluation and Learning capacity, expert services are needed to guarantee proper design, implementation and learning while responding to the requirements of building resilience and sustainability across communities.

Why Evaluate with PEA.

Evaluations aim to identify and quantify the value, merit and/or worth of the activities, projects and programmes that you (our clients, partners and collaborators) implement. In defining the boundaries of an evaluation, PEA Consultancy is invested in going beyond the usual rubric of What is… to accommodate the So What? and Now What? questions, gathering data on the activities, projects and programmes our partners and collaborators design and implement to create and sustain real change in the communities we work with.
Every evaluation exercise we take on at PEA Consultancy is grounded in our goal of “making data and evidence play a critical role in changing lives”. Ultimately, we work toward supporting stakeholder learning, innovation and growth strategies, presenting robust and rigorous methodologies bounded by defined questions that answer to your specific requirements as a client, partner or collaborator.

Our Approach to Evaluations.

Development programming defines actions that are responsive to the gaps and needs of the communities we work with. This is our overarching hypothesis. In responding to calls for technical assistance in evaluation, whether in conflict, relief, transition or development contexts, PEA always assumes that:

  • our clients, partners and/or collaborators have a defined scope of action that will use the findings of the proposed evaluation;

  • there is a need to understand the merit, worth or value the evaluation will produce, which essentially forms its foundation;

  • there are adequate resources to undertake a sufficiently comprehensive and rigorous evaluation, including the availability of existing, good-quality data and additional time and money to collect more; and,

  • the proposed evaluation is clearly aligned with the strategies and priorities of the client/partner/collaborator and their partners.

This premise matters because it allows PEA to validate the need to provide technical assistance in documenting the milestones of the referenced intervention, to inform adaptations and scale-ups, and to mirror actions in regions whose needs are comparable to those of the current programme locations.

Clarity on the above also provides an environment conducive to an approach we at PEA find very effective in facilitating evaluations. During the inception phase of our engagements, our initial proposal is always to form an Evaluation Management Group (EMG), consisting of our evaluators and the project implementation teams. These teams give us latitude in defining the boundaries of evaluations and refining the key evaluation questions critical to an optimal understanding of the project contexts.

The conventional OECD-DAC evaluation criteria facilitate indexing every stakeholder involved through the implementation phases of a project or programme as a key source of evidence. This supports our efforts in documenting the expected and unexpected immediate, intermediate and ultimate outcomes of the projects or programmes under review, and in judging the strengths and weaknesses of the interventions we evaluate. Ultimately, engaging the EMG has always allowed project teams to own the methodology and exhaustively define the evaluation questions, given their understanding of the project geographies, and has rewarded our evaluations with the quality data and perspectives critical to estimating and reporting on the value, worth or merit of the projects or programmes.

The key steps we engage in when delivering an evaluation are:

(i) Framing the boundaries of an evaluation:

PEA prioritizes understanding the needs of our clients, partners and/or collaborators. Evidence from evaluations can be synthesized to fit a myriad of needs, but the one underlying them all is learning. Our evaluation models therefore cover what has been adopted as convention through the OECD-DAC rubrics. We thrive, however, in using the frames mentioned earlier, So What? and Now What?, integrating beneficiary feedback and our expert judgement in validating theories of change. Ultimately, we work to:

  1. Measure and report on the extent to which intervention objectives and design respond to the respective stakeholder needs, policies, and priorities;

  2. Measure and report on the compatibility of an intervention with other interventions across the implementation geographies and the degree to which the project designs and implementation attain internal coherence;

  3. Measure and report on the sensitivity of interventions to the demographics in gender and disability across implementation geographies; 

  4. Measure and report on the extent to which interventions achieve, or are expected to achieve, their objectives and results, including any differential results across groups;

  5. Measure and report on the extent to which interventions deliver, or are likely to deliver, results in an economic and timely way;

  6. Measure and report on the extent to which interventions generate, or are expected to generate, significant positive or negative, intended or unintended, higher-level effects; and,

  7. Measure and report on the longevity of the net benefits of interventions while highlighting aspects that are candidates for replication or scale-up.
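The seven measurement points above track the OECD-DAC criteria, with gender and disability sensitivity as a cross-cutting lens. As an illustrative sketch only (the labels and ordering below are our assumption, not an official schema), they could be organised as:

```python
# Illustrative mapping of the seven measurement points to the OECD-DAC
# criteria plus a gender/disability cross-cutting lens. The keys and
# phrasing are an assumption for illustration, not an official schema.
OECD_DAC_CRITERIA = {
    "relevance": "Do objectives and design respond to stakeholder needs, policies and priorities?",
    "coherence": "Is the intervention compatible with other interventions in the same geographies?",
    "gender_and_disability": "Is the intervention sensitive to gender and disability demographics?",
    "effectiveness": "Are objectives and results achieved, including differential results across groups?",
    "efficiency": "Are results delivered in an economic and timely way?",
    "impact": "What significant higher-level effects, intended or unintended, are generated?",
    "sustainability": "Will net benefits last, and which aspects merit replication or scale-up?",
}

# One entry per measurement point listed above.
for criterion, question in OECD_DAC_CRITERIA.items():
    print(f"{criterion}: {question}")
```

A mapping like this keeps each reporting obligation tied to a named criterion, which is useful when assembling the evaluation matrix later.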

(ii) Defining the key evaluation questions (KEQs):

PEA believes that the rigor of evidence gathered through evaluations is defined by the depth of the evaluation questions. In our evaluation exercises, we reference the key evaluation questions below, which generally cover the OECD-DAC criteria with components on equity and human rights that allow questions on gender and disability sensitivity in programme implementation to be added. In keeping with our participatory and consultative approach to evaluation, our propositions are always formed on the macro-questions and, at inception, involve the EMG in defining the lower-level questions that build into the evaluation matrix. Our reasoning is always to measure and report on the performance of a project or programme. We ask the following macro-questions:

  1. How relevant is the project or programme?

  2. How well is the project or programme being implemented?

  3. How effective was the project or programme, in terms of the intended outcomes?

Ultimately, PEA will always use the lower-level evaluation questions as either redefined in the client ToRs or discussed within the EMG. These questions always support elaborate and extensive transcription of secondary data (monitoring data) and/or, when necessary, the collection of additional primary data through surveys, key informant interviews (KIIs) and focus group discussions (FGDs).

To support the frames of evaluation, especially those linked to constructing attribution along project result chains, PEA will always have:

  • Descriptive questions that ask how things are and what has happened, including descriptions of the initial situation and how it has changed, the activities of the intervention and other related programmes or policies, and the context in terms of participant characteristics and the implementation environment;

  • Causal questions that ask whether or not, and to what extent, observed changes are due to the intervention being evaluated rather than to other factors, including other programmes and/or policies; and,

  • Evaluative questions that ask about the overall conclusion as to whether this programme can be considered a success, an improvement or the best option in context, referenced on sector vulnerabilities and priorities.
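The macro-questions, the lower-level questions agreed with the EMG, the question types above, and the data sources they draw on are commonly kept aligned in an evaluation matrix. The row structure below is a hypothetical sketch of that idea, not a PEA template:

```python
from dataclasses import dataclass, field

@dataclass
class MatrixRow:
    """One hypothetical row of an evaluation matrix."""
    macro_question: str   # one of the three macro-questions above
    sub_question: str     # lower-level question agreed with the EMG
    question_type: str    # "descriptive", "causal" or "evaluative"
    data_sources: list = field(default_factory=list)  # e.g. monitoring data, KIIs, FGDs

# Hypothetical example rows for illustration only.
matrix = [
    MatrixRow("How relevant is the project or programme?",
              "How were participant needs assessed at design?",
              "descriptive", ["monitoring data", "KIIs"]),
    MatrixRow("How effective was the project or programme?",
              "To what extent are observed changes due to the intervention?",
              "causal", ["surveys", "FGDs"]),
]

for row in matrix:
    print(f"[{row.question_type}] {row.sub_question} -> {', '.join(row.data_sources)}")
```

Keeping each sub-question tagged with its type and sources makes it easy to check that every macro-question is covered by at least one data source before fieldwork begins.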

(iii) Defining impact:

A key judgement and reporting point in evaluations is the impact of an intervention. We at PEA accommodate the premise that impact can occur later than the prescribed programme life, as a result of the intermediate outcomes. Our approach therefore rests on measuring the achievement of an intervention's intermediate outcomes, with a causation model in which they may occur before the end of a project or programme and contribute to the intended final impact. Through extensive consultation and discussion with the EMG, defining success becomes a priority when making evaluative judgments. The intention is always to qualify what success (merit, value or worth) means in an evaluation and then adopt specific rubrics that define different levels of performance (or standards) for each evaluation criterion, specifying what evidence will be gathered and how it will be synthesized to reach defensible conclusions about the worth of the intervention. At the close of an evaluation, we are always able to weigh the positive impacts against the negative before qualifying any project as a success or failure.
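A rubric of the kind described above can be as simple as named performance levels per criterion, each tied to the standard the evidence must meet. The levels and wording below are a hypothetical sketch of the idea, not an actual PEA rubric:

```python
# Hypothetical performance rubric: each evaluation criterion gets an
# ordered set of standards against which gathered evidence is judged.
# Levels are ordered from worst to best.
RUBRIC_LEVELS = ["poor", "adequate", "good", "excellent"]

rubric = {
    "effectiveness": {
        "excellent": "all intended outcomes achieved across all groups",
        "good": "most intended outcomes achieved",
        "adequate": "some outcomes achieved, notable gaps remain",
        "poor": "few or no intended outcomes achieved",
    },
}

def standard_for(criterion: str, level: str) -> str:
    """Return the evidence standard a performance level corresponds to."""
    return rubric[criterion][level]

print(standard_for("effectiveness", "good"))
```

Writing the standards down before data collection is what makes the eventual success/failure judgement defensible rather than ad hoc.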

(iv) Using the project theory of change to validate causation or attribution:

At PEA, we always advise our clients and partners on the importance of going beyond investigating linkages between activities and impact. Our proposed best practice is to examine the causal chain between activities, outputs, intermediate outcomes and impact. Our position in designing and conducting evaluations is that a theory of change, explaining how activities are understood to produce a series of results that contribute to achieving the ultimate intended impacts, is helpful in guiding causal attribution. We appreciate programmes with defined theories of change and support clients whose projects lack one in developing it before the evaluation.

In particular, a theory of change supports our evaluation initiatives by identifying:

  1. Specific evaluation questions, especially in relation to those elements of the theory of change for which there is no substantive evidence yet;

  2. Relevant variables that should be included in data collection;

  3. Intermediate outcomes that can be used as markers of success in situations where the impacts of interest will not occur during the time frame of the evaluation;

  4. Aspects of implementation that should be examined; and,

  5. Potentially relevant contextual factors that should be addressed in data collection and in analysis, to look for patterns.

Finally, we recognize that evaluations may confirm the theory of change or may suggest refinements based on analysis of evidence. We measure success along the causal chain and, if necessary, examine alternative causal paths. For example, failure to achieve intermediate results is a strong indication of implementation failure, whereas failure to achieve the final intended impacts despite achieving the intermediate outcomes points to theory failure rather than implementation failure. Such judgement calls have important implications for the recommendations that come out of our evaluations. In cases of implementation failure, we recommend actions to improve the quality of implementation; in cases of theory failure, we recommend rethinking the whole strategy for achieving impact.
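The distinction between implementation failure and theory failure can be reduced to a simple decision rule: locate where along the causal chain the results stopped. The function below is a deliberate simplification with hypothetical boolean inputs, intended only to make the logic explicit:

```python
def diagnose_failure(intermediate_outcomes_achieved: bool,
                     final_impact_achieved: bool) -> str:
    """Simplified, hypothetical decision rule: use where the causal chain
    broke to distinguish implementation failure from theory failure."""
    if final_impact_achieved:
        return "success: theory of change confirmed along the chain"
    if not intermediate_outcomes_achieved:
        # The chain broke early: activities did not deliver their outcomes.
        return "implementation failure: improve the quality of implementation"
    # Intermediate outcomes held, yet the final impact did not follow:
    # the causal assumptions linking outcomes to impact are suspect.
    return "theory failure: rethink the strategy for achieving impact"

print(diagnose_failure(intermediate_outcomes_achieved=False, final_impact_achieved=False))
print(diagnose_failure(intermediate_outcomes_achieved=True, final_impact_achieved=False))
```

Real evaluations, of course, deal in degrees of achievement and alternative causal paths rather than booleans; the rule only captures the direction of the recommendation that follows from each diagnosis.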

We are deliberate about evaluative thinking.