"Is it good practice for a customer to respond to a customer satisfaction survey in front of the tech who performed the service?" originally appeared on Quora - the knowledge sharing network where compelling questions are answered by people with unique insights.

Answer by Jae Alexis Lee, IT Support Manager and former Customer Service Manager, on Quora:

It's 7:45 on a Tuesday and you're watching the tech from the cable company (let's call him Dan) pack up his tools after hours of watching him trudge back and forth between his truck, the cable box at the road, and your TV. It's not the day you wanted to take an afternoon off work, but it was the soonest the cable company could get him out there, so you bit the bullet and rushed home to meet the 1:00 - 4:00 arrival window they quoted. Of course, Dan showed up dutifully at 3:30. Finally, things look like they're working for the most part, except that Dan says your On Demand won't work for up to 24 hours. Then he hands you his phone with an app open and says, "Hey, could you fill out a quick survey about me and then I'll get out of your hair?"

It's late, you've taken time off work, and you've been putting off dinner until he's done. Now you're trying to figure out whether he gets a three out of five or a four, and whether it's really his fault that things took so long. You look up at Dan and he sort of shrugs and says, "I know, they're kind of a pain, but management really gets after us about those, so..."

You're not entirely happy, but you don't want to get the guy in trouble, so you quickly click through with lots of fours and fives before handing back his phone, so that you can get on with figuring out dinner for yourself and he can go home, or at least leave yours.

Sound familiar? In a world driven by data, it's increasingly common for companies to solicit feedback on every level of customer interaction, sometimes while you're still face to face with the associate you're evaluating.

But is this the right way to do business?

Yes, it is, if you want high customer satisfaction scores. No, it isn't, if you want honest post-service feedback.

I've played with a lot of customer satisfaction data over the years, and there are things you learn when you work with it regularly. Let's walk through some observations, and then we'll talk about the good and the bad of the interaction described here.

Observation: The sooner you survey clients, the better your customer satisfaction scores.

But why? If I change nothing but the delay between the service interaction and the survey, why does customer satisfaction decline just because I surveyed later?

Observation: The more time that passes between the support interaction and the survey, the lower the survey response rate.

Okay, so if I wait longer to survey, fewer people take the survey. Got it... but that still doesn't explain why my customer satisfaction scores (measured at an organizational level) go down if I wait to conduct my survey.

Observation: People with negative customer experiences are more likely to participate in surveys than people with neutral or positive experiences with support.

And this is where the magic really starts to happen. Dissatisfied customers are more likely to give feedback than satisfied customers. Client willingness to participate in customer satisfaction surveys diminishes over time, but the rate at which it diminishes is different for satisfied, neutral, and dissatisfied customers: the satisfied stop responding quickly, while the dissatisfied keep responding.
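To put numbers on that, here's a minimal sketch in Python. It isn't data from any real survey program; the customer mix, the scores each group gives, and the response rates at each delay are all made-up assumptions chosen to illustrate the effect:

```python
# Hypothetical customer mix and the score each group tends to give (1-5 scale).
segments = {
    "satisfied":    {"share": 0.60, "score": 5},
    "neutral":      {"share": 0.25, "score": 3},
    "dissatisfied": {"share": 0.15, "score": 1},
}

# Assumed response rates by survey delay. The key assumption, per the
# observations above: satisfied customers' willingness to respond decays
# much faster than dissatisfied customers'.
response_rates = {
    "immediate": {"satisfied": 0.50, "neutral": 0.40, "dissatisfied": 0.60},
    "one week":  {"satisfied": 0.10, "neutral": 0.08, "dissatisfied": 0.40},
}

for timing, rates in response_rates.items():
    # Fraction of the customer base that responds, per segment.
    responders = {name: seg["share"] * rates[name] for name, seg in segments.items()}
    total = sum(responders.values())
    # The observed average score is weighted by who actually responded.
    observed = sum(responders[name] * segments[name]["score"] for name in segments) / total
    print(f"{timing:>9}: observed CSAT = {observed:.2f}")
```

Run it and the observed score falls from about 3.86 immediately after the interaction to 3.00 a week later, even though the underlying satisfaction never changed. The only thing that moved was who bothered to respond.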

Great, Jae, so why does that matter?

High customer satisfaction scores are inherently useful to businesses. Those scores can be used in marketing aimed at customers ("Buy our product, we have the highest customer satisfaction in the business for product X!"). Those scores can also influence investors' confidence in a company's performance, and plenty of other things besides. It's in a company's best interests to have high scores.

Notice I didn't say it's in the company's best interest to provide good customer service. Remember the observations above: clients receiving the same service give higher scores when surveyed sooner, and dissatisfied customers remain willing to take surveys for a longer period of time.

This is where we find ourselves asking a difficult question: if I get two different scores depending on when I sample customers, which sampling methodology most accurately describes the level of service my customers actually received?

Surveying immediately post-interaction:

When you survey customers immediately following their interaction, you get the highest participation rates. Arguably, having the highest participation rate makes it the most accurate sampling.

The risk you face is that this method of surveying doesn't capture broken-promise dissatisfiers. Ever call customer service, have them promise to do something, and then get your bill and find they didn't do it? That's the kind of thing you miss. Likewise with a tech who says, "I fixed it." Does it stay fixed?

So now you have a dilemma. The only way to capture that data is to survey at a point where the customer could have experienced downstream problems caused by the interaction, but you already know that if you push the survey out, your scores will be lower, because satisfied customers won't participate as much as dissatisfied customers do.

Oh, and one more thing:

There is a fundamental tainting of the survey if the person being evaluated is present while it's completed. Customers are hesitant to provide honest, critical, or negative feedback when that person is standing right there, because it can become confrontational. You get the most engagement, but the results are tainted.

So what's the ideal?

In many cases the ideal is to sample the same day, but through a method that makes clear the person being evaluated is not present. With an onsite tech, the ideal would be a same-day phone call or email follow-up from the organization conducting the survey. In other support channels, more immediate follow-up is valid as long as it's clear that the person being evaluated is not present. You get slightly lower participation, but you also get more accurate data, which is the real goal of a support organization. Without an honest understanding of what your customers experience, it's impossible to identify pain points and remedy them.
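If you do survey later and accept the lower participation, there are standard ways to correct for the skew. Here's a minimal sketch of one of them, inverse-response-rate weighting (my own illustration, not something from the original question): each response is weighted by the inverse of its group's estimated response rate, so under-represented satisfied customers count for more. The scores and rates below are made-up; in practice you'd estimate the rates from your own participation data.

```python
# Each collected response paired with the estimated response rate of the
# group it came from: (score, estimated_response_rate).
responses = [
    (5, 0.10), (5, 0.10), (3, 0.08),            # few satisfied/neutral replies
    (1, 0.40), (1, 0.40), (2, 0.40), (1, 0.40), # dissatisfied over-represented
]

# Naive average: dominated by whoever chose to respond.
raw = sum(score for score, _ in responses) / len(responses)

# Weighted average: divide each score by its group's response rate,
# then normalize by the total weight.
weighted = (sum(score / rate for score, rate in responses)
            / sum(1 / rate for _, rate in responses))

print(f"raw mean:      {raw:.2f}")       # ~2.57
print(f"weighted mean: {weighted:.2f}")  # ~3.53
```

The raw mean skews low because the dissatisfied dominate the sample; the weighted mean pulls it back toward the truth. None of that helps, of course, if the answers themselves were fudged because Dan was standing in the doorway.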

While high scores are good for business, artificially inflated scores mean your business will suffer higher customer churn after support interactions, and you won't be able to identify why, which is a good way to go out of business, and no one wants that.

This question originally appeared on Quora - the knowledge sharing network where compelling questions are answered by people with unique insights. You can follow Quora on Twitter, Facebook, and Google+.