Importance of Feedback

We know how important feedback is to an organization that’s trying to improve (which should be all of them).  Continuous improvement means a continuous cycle of change, assessment, and adjustment, and feedback from our users and community is a critical part of the assessment.  Did a change have no effect?  Did it make things better, worse, or some of each?

So we know that the success of our effort to create a continuously improving GlobalNOC depends a lot on high quality feedback from the clients we serve.  We think this requires both daily and per-project feedback, and more holistic, comprehensive feedback on a semi-annual basis.  But here’s where we ran into problems.  As it turns out, we’re both too big AND too small to make it easy to gather comprehensive, reliable feedback.

The Challenge: Too Big AND Too Small


We have too many clients to talk to everyone individually and too few to use common survey methods.


If we worked with just a handful of people, we’d probably skip surveys and just talk to each of them individually every few months.  It wouldn’t be ideal from an anonymity perspective, but we would get very direct feedback.  We could easily track improvement based on whether a particular individual’s opinion changed over time.  But we serve 22 different organizations, and we frequently interact with several different people at each one.  This means there are roughly 100 individuals we work with on a daily basis, which is just too many to talk to one by one.

On the other hand, if we worked with thousands of people, we could use a common survey to track data-based improvement over time.  We could reach many and track aggregate changes in sentiment.  But, in our experience, we get about 30 responses to surveys of our clients.  With a sample size this small, changes in the data could very easily come from small changes in who participates in the survey rather than from actual changes in what people think.  What would appear to be a trend in the data would really just be noise.
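To make the noise problem concrete, here is a minimal sketch with hypothetical numbers: even if no individual changes their opinion, swapping out a handful of respondents in a ~30-person sample can move the mean noticeably.

```python
# Hypothetical responses on a 1-5 scale, ~30 respondents per round.
year1 = [4] * 15 + [3] * 10 + [5] * 5

# Next round: five of the 3-raters skip the survey, and five new
# people who happen to rate us a 5 participate instead.
year2 = [4] * 15 + [3] * 5 + [5] * 10

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(year1), 2))  # 3.83
print(round(mean(year2), 2))  # 4.17
```

The apparent improvement of about a third of a point comes entirely from who showed up, not from any change in sentiment.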

So, we have too many clients to talk to everyone individually and too few to use common survey methods. How do we bridge this gap?

Survey + Individual Tracking = A Goldilocks Path

As it turns out, there is a way to get the best of both methods. Being at IU, we’re lucky enough to have access to a lot of resources.  One of them is the IU Center for Survey Research (https://csr.indiana.edu/).  The breakthrough for us came from a discussion with the director of the center, Ashley Clark.  If we could use a survey, but track individuals over time, we could have the scaling benefits of a survey with the reliability that comes from tracking individual sentiment.

This means two things.  The first one is straightforward: we needed to adjust our survey to gather identifying information about individuals.  Second, we needed to change how we measure change.  Rather than looking at the mean or median response, we need to look at the number of respondents whose response changed.  For example, we can count how many respondents gave a more favorable response than they did previously.  That count becomes the new measure.
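A minimal sketch of that measure, using hypothetical responses keyed by the permanent anonymous identifier: only respondents present in both rounds are compared, and the measure is a count of changes rather than an average.

```python
# Hypothetical responses on a 1-5 scale, keyed by permanent anonymous ID.
prev = {"id1": 3, "id2": 4, "id3": 2, "id4": 5}
curr = {"id1": 4, "id2": 4, "id3": 3, "id5": 5}  # id4 skipped this round; id5 is new

# Only people who answered both rounds are comparable.
shared = prev.keys() & curr.keys()

improved = sum(1 for i in shared if curr[i] > prev[i])
declined = sum(1 for i in shared if curr[i] < prev[i])

print(improved, declined)  # 2 0
```

Because each comparison is within a single person, respondents who join or drop out simply fall out of the calculation instead of masquerading as a trend.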

The NEW Challenge: Anonymity

This works great, but has one obvious challenge: anonymity!  If we’re tracking individuals with our survey, that could severely impact how candid people are willing to be.  The compromise we made was to move from an anonymous survey to an anonymized one.  We asked those who responded to provide identifying information, with an explanation of what we would do with it and why it was needed.  We then sent all the response data to a trusted and disinterested person, who translated each person’s information into a random but permanent identifier.  Nobody involved in reviewing or acting on the data would see any person’s information.
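The role the trusted person plays can be sketched as follows (this is an illustration, not our actual tooling): they privately maintain a mapping from identity to a random but permanent identifier, and only the anonymized data ever reaches the reviewers.

```python
import secrets

# Held privately by the trusted, disinterested person; never shared
# with anyone reviewing or acting on the survey data.
_mapping = {}

def anonymize(name: str) -> str:
    """Map a person's identity to a random but permanent identifier."""
    if name not in _mapping:
        _mapping[name] = secrets.token_hex(8)  # random on first sight
    return _mapping[name]  # permanent on every later survey round

# The same person always gets the same ID across survey rounds,
# so individual tracking works without exposing identities.
assert anonymize("Alice") == anonymize("Alice")
assert anonymize("Alice") != anonymize("Bob")
```

The identifiers being permanent is what makes round-over-round comparison possible; the mapping staying with a disinterested third party is what preserves candor.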

As an aside, it would be really nice if survey systems supported this anonymization function directly.  The system we use (Qualtrics) doesn’t have this capability, and that makes the process much more cumbersome.  It would tremendously improve both trust and ease of use if the survey system itself could maintain the hidden mapping from name to anonymous identifier.

Still, we’ve now run this process once, and it seems to work very well.  The vast majority of our respondents provided names, showing that they trust our anonymization, and our confidence in the data is high.  We’re excited to have found a path, and to start focusing on what our data says instead of how we measure it!

Future: Net Improvement Score?

Going forward, this will open up new options for tracking.  For instance, we’re considering a variation of the net promoter score that we call a net improvement score.  For net promoter score, the question asked is “How likely would you be to recommend this company/service/product to others?”  The calculation is based on the % of respondents who answer with a 9 or a 10, minus the % of respondents who answer with a 6 or lower.  The result is a score from -100% to 100%.  Carrying this over, a useful measure for us might be the % of respondents who gave a more favorable response to a question minus the % who gave a less favorable one.
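A minimal sketch of that net improvement score, with hypothetical data: among respondents who answered both rounds, take the percentage who answered more favorably and subtract the percentage who answered less favorably.

```python
def net_improvement(prev: dict, curr: dict) -> float:
    """Percent of returning respondents who answered more favorably,
    minus the percent who answered less favorably (-100 to 100)."""
    shared = prev.keys() & curr.keys()  # respondents present in both rounds
    up = sum(1 for i in shared if curr[i] > prev[i])
    down = sum(1 for i in shared if curr[i] < prev[i])
    return 100 * (up - down) / len(shared)

# Hypothetical 1-5 responses keyed by permanent anonymous ID:
# two respondents improved, one declined, one was unchanged.
prev = {"a": 3, "b": 4, "c": 5, "d": 2}
curr = {"a": 4, "b": 3, "c": 5, "d": 4}
print(net_improvement(prev, curr))  # 25.0
```

Like net promoter score, the result is a single signed number, but here a positive value means more individuals improved their opinion than worsened it, which is exactly the claim we want the data to support.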

Director, Engineering