
27th January 2014

Practice what you preach

Keystone spends most of its time helping organizations understand what people think of them: gathering perceptual data and using it to improve performance. We have a whole methodology dedicated to getting organizations to listen and respond to those affected by their work.

The relationships our clients have with their constituents often differ from the ones we have with our clients, but at Keystone we recognize that systematically gathering feedback from our own clients is valuable and can help us improve. After all, like the many NGOs, CSOs, investors and agencies we work with, we too aim to have a positive impact on those around us. Moreover, we employ many of the same techniques we suggest our clients use:

  • We survey clients systematically, after key touch points
  • We ask the same standardized questions so we can do cross-time comparisons
  • We disaggregate our data to better understand it and use Net Promoter Analysis to present it
  • We analyze the data collectively, and reflect on it
  • We report back to clients what we heard and talk with them to make sense of the data to improve our interpretation
  • We make changes and repeat the process to continually improve the quality of our services
  • We regularly publish the results

Net Promoter analysis splits respondents on a 0-10 scale into three groups: Detractors (0-6), Passives (7-8) and Promoters (9-10). A single weighted performance score, or Net Promoter Score (NP Score), can then be calculated as the percentage of Promoters minus the percentage of Detractors.
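
For readers unfamiliar with the mechanics, here is a minimal sketch of that calculation in Python; the ratings themselves are invented for illustration.

```python
def np_score(ratings):
    """Compute a Net Promoter Score from 0-10 ratings.

    Detractors score 0-6, Passives 7-8 and Promoters 9-10;
    the NP Score is the percentage of Promoters minus the
    percentage of Detractors, so it ranges from -100 to +100.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Hypothetical responses to a single survey question
ratings = [10, 9, 9, 8, 10, 9, 7, 10, 9, 6]
print(np_score(ratings))  # 60 (70% Promoters minus 10% Detractors)
```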

We are interested in knowing: (a) whether our clients act as a result of our work; (b) how they rate the quality of our work; and (c) how we compare against others they might use instead of us.

When asked, on a scale of 0-10, how likely they are to take action based on the data and findings we provide, our clients have given an average score of 9.5 over the last two years, with an NP Score of 89.

Second, we ask what they think about the overall standard of our work. Again, the results are positive, with an average NP Score of 46 over the last two years. Comments received in this area include:

  • “The staff was great and responsive to our needs”
  • “Updates were frequent and useful”
  • “Managed process well. High quality report. Well presented”

Third, we want to understand how useful our services are compared to other planning, monitoring or evaluation activities. Here we have received an average NP Score of 45 over the past two years. It would be useful to have external benchmarks for these questions, so we could see how we fare against other similar organizations.

Not everything is as we would like it to be, however. One benefit of asking the same questions consistently is the ability to track performance over time. Our average NP Score for how well we understand our clients' organizations and their work has been steadily declining, and this is something we want to address.

[Figure: NP Score trend over time]

While the figure shows no substantial increase in Detractors (expressed as a mean rating, the drop is only from 8.5 to 7.8), we still want to reverse the trend and turn the Passives back into Promoters.
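
A small worked example (with invented numbers) shows why the score can fall sharply even when the mean barely moves: Promoters slipping to Passives drags the NP Score down without creating a single Detractor.

```python
def np_score(ratings):
    """NP Score: % Promoters (9-10) minus % Detractors (0-6)."""
    return round(100 * (sum(r >= 9 for r in ratings)
                        - sum(r <= 6 for r in ratings)) / len(ratings))

before = [9, 9, 9, 8]  # mean 8.75: three Promoters, one Passive
after = [8, 8, 8, 7]   # mean 7.75: all Passives, no Detractors

print(np_score(before))  # 75
print(np_score(after))   # 0, despite the mean dropping less than a point
```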

By disaggregating the data into client groups, we can see some interesting trends. Our bespoke consultancy clients still give us good NP Scores (an average of 38 in 2013); however, those using our off-the-shelf benchmark surveys, such as the Development Partnership Survey, average an NP Score of 0. Given the nature of the off-the-shelf products (low-cost, widely applicable, fast-response), this gap is perhaps unsurprising, but it is still something we want to improve.
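
As a sketch of the disaggregation step (the client groups and ratings here are hypothetical), grouping responses by service line and scoring each group separately is what makes a gap like this visible:

```python
from collections import defaultdict

def np_score(ratings):
    """NP Score: % Promoters (9-10) minus % Detractors (0-6)."""
    return round(100 * (sum(r >= 9 for r in ratings)
                        - sum(r <= 6 for r in ratings)) / len(ratings))

# Hypothetical (client group, rating) pairs from one survey round
responses = [
    ("bespoke consultancy", 9), ("bespoke consultancy", 10),
    ("bespoke consultancy", 8), ("bespoke consultancy", 9),
    ("off-the-shelf survey", 9), ("off-the-shelf survey", 8),
    ("off-the-shelf survey", 6), ("off-the-shelf survey", 7),
]

by_group = defaultdict(list)
for group, rating in responses:
    by_group[group].append(rating)

for group, ratings in sorted(by_group.items()):
    print(f"{group}: NP Score {np_score(ratings)}")
# bespoke consultancy: NP Score 75
# off-the-shelf survey: NP Score 0
```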

Armed with this data, we are changing our benchmark survey offerings to allow much more customization based on individual client needs. This includes more scope for internal data disaggregation, by region or program area for example, as well as additional custom questions. Also, given the depth of our comparative database, we can now offer organizations more targeted benchmarks: against only large organizations, for example, or only faith-based organizations or those working in similar fields.
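
To illustrate the idea of a targeted benchmark (the records and filter fields below are entirely hypothetical), the comparative database can be filtered down to a set of peer organizations before the benchmark figure is computed:

```python
# Hypothetical comparative database: one record per surveyed organization
database = [
    {"org": "A", "size": "large", "sector": "health", "nps": 42},
    {"org": "B", "size": "small", "sector": "health", "nps": 55},
    {"org": "C", "size": "large", "sector": "education", "nps": 31},
    {"org": "D", "size": "large", "sector": "health", "nps": 48},
]

def targeted_benchmark(records, **criteria):
    """Average NP Score over only the organizations matching every criterion."""
    peers = [r for r in records
             if all(r.get(field) == value for field, value in criteria.items())]
    return sum(r["nps"] for r in peers) / len(peers)

# Benchmark against large health organizations only
print(targeted_benchmark(database, size="large", sector="health"))  # 45.0
```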

Through our discussions with both previous and potential clients (an important step in making sense of what the data is saying), these changes seem to resonate, and they allow us to tailor our services to each organization's specific needs. Whether these adaptations succeed will be revealed the next time we ask the question: once again we will track our scores and use them to reflect on how the improvements have affected clients' experience.

Like the organizations we work with, we should not be scared of criticism; we should respond to it appropriately and use data to continually strive to do better. That is how all organizations, not just Keystone, can maximize their impact for those they aim to help.

Kai

Kai Hopkins is a Senior Consultant at Keystone Accountability and delivers on all client projects, from Comparative Constituent Feedback projects to bespoke consultancy. Kai has an undergraduate degree in Politics and an MBA.