At PSOW, we are keen to learn how we can improve the experience of our service users. We have five Service Standards committing us to excellent customer service. Below, we discuss the telephone survey of our complainants that we undertook during 2019/20 to find out how people rate our service and how we performed against our Service Standards. We thank all the respondents for their time and insights.
In the past, we gathered feedback from our service users mainly through a survey on our website. Service users were invited to complete the survey via a link included in all case-related correspondence. However, we found that the response rate to this survey was generally low – for example, during 2019/20 it was completed only 75 times. The pool of respondents was also entirely self-selecting, which meant that we had no way of ensuring that the survey results were representative of the experience of the broader pool of our complainants.
For these reasons, we decided to undertake additional customer satisfaction research through a targeted telephone survey of a representative sample of our complainants. The external supplier we commissioned for this work (Beaufort Research) conducted 204 interviews with our complainants whose cases were closed between April and December 2019.
The sample was constructed to enable us to identify trends in our performance based on characteristics such as the type of complaint and the stage at which a case was closed.
The sample of respondents was constructed to be broadly proportionate. However, we weighted it to give slightly higher-than-proportionate representation to complainants whose cases were closed at the investigation stage. We did this to reflect the fact that most of our office's resources are directed towards investigations.
The survey results pointed to several themes in the experience of our customers.
First, we noted a very strong correlation between satisfaction with our customer service and satisfaction with the case outcome. People satisfied with the outcome of their case assessed our performance much more favourably than those who were not. The differences in scores between these two groups were so significant that, in the interest of fairness, throughout this post we report both scores side by side.
Second, we found that our service users were generally more satisfied with our handling of cases about public bodies than of cases about alleged breaches of the Code of Conduct. The aspects of our service that scored noticeably lower for the latter group of cases concerned how well we explained our role and process; how well we understood a complaint; whether we considered the evidence thoroughly; and whether we were impartial.
Third, people generally assessed our service more positively in relation to cases closed at the investigation stage than those closed at the assessment stage. There was, however, one exception: our score on timeliness was higher for assessment cases.
Lastly, our performance on health-related cases scored higher than on other cases in relation to staff politeness and quality of communication – but lower in relation to how well we explained our role and process; how well we understood a complaint; and whether we considered evidence thoroughly and impartially.
Overall, 57% of all respondents said they were very or fairly satisfied with the level of customer service they had received from PSOW. This score was much higher (98%) among those who were satisfied with the outcome.
In designing the survey, we aimed to ensure that the questions would allow us to rate our service against our Service Standards. The questions covered 4 of our 5 Standards.
We did not ask questions in respect of our fifth Standard – “We will operate in a transparent way”. This was because that Standard is a little different: it focuses less on the journey of our complainants and more on their general perceptions of our transparency as an organisation. We also wanted to avoid discouraging potential participants by making the survey too long. However, next year we will consider expanding the questionnaire to also capture perceptions of our transparency among our service users.
Overall, the aggregate score on Service Standard 1 was 65% for all respondents, and 77% for those satisfied with the outcome.
Overall, the aggregate score on Service Standard 2 was 64% for all respondents, and 82% for those satisfied with the outcome.
Overall, the aggregate score on Service Standard 3 was 57% for all respondents, and 89% for those satisfied with the outcome.
When asked how we could improve our service, the improvements most commonly suggested by respondents were:
Although it was good to see that we are performing well in many respects (for example, in making our service accessible), customer satisfaction research is most useful as a foundation for learning and improvement.
Clearly, some of the less favourable opinions about our service were strongly linked to the outcome of the case. This is perhaps inevitable, given that many of our service users feel very strongly about their cases. Also, many have had adverse experiences in accessing complaints processes before reaching us and have little trust in the fairness of the process.
There is little we can do to improve our performance in this respect, as the outcome of each case will always ultimately depend on the evidence presented to us. However, the results of the survey pointed to several areas on which we can focus moving forward.
Overall, although a large majority of people felt that our staff were courteous and respectful, it appeared that our service users would appreciate more consistent contact and updates on their cases.
We were surprised by the low score in response to the question about the availability of reasonable adjustments. Questions about reasonable adjustments are included in our hard copy and online complaint forms and in letters acknowledging the receipt of new complaints. It may be that respondents not needing reasonable adjustments were less likely to remember being asked about this. However, we realise that we need to do more to raise awareness of this option. During 2020/21, we will remind our staff to ask our complainants about this more proactively throughout the process of handling their complaint. We will also seek to collect better data about the number of reasonable adjustment requests made to us and the proportion that we are able to meet.
Clearly, we also need to do more to improve the experience of our service users who complain about alleged breaches of the Code of Conduct. We realise that Code cases tend to be complex. Also, very few result in an investigation and/or further referral of the case – people are therefore less likely to be satisfied with the outcome, which also affects their perception of our service overall. However, this year we will be reviewing our publications about our powers and process in relation to these cases, to ensure that we do all we can to improve the experience of our service users in this respect.
Finally, although we took great care in designing the survey, several questions did not yield sufficiently meaningful responses to be included in the overall analysis of the results. For example, only 30% of respondents dissatisfied with the outcome said that they felt able to ask for a review. However, given the brevity of the survey, we lack more detailed information as to why that was the case. In another example, only 22% of respondents were aware that we have a policy on unacceptable behaviour. This may be because individuals would only be informed of this policy at the point when it is needed; again, however, we do not have enough information to say that with certainty. We will make sure that we incorporate these lessons into the design of the 2020/21 survey.