Employee surveys have been with us for a long time, with roots in industrial psychology and the science of running organisations effectively.
Technology, societal shifts, and global events have changed how we think about surveys, how we do them and what they measure. What has not changed is their fundamental role in gathering the employee feedback that is crucial to healthy, well-functioning organisations.
In this guide we discuss types of employee surveys, a process for creating impactful questionnaires and the other important steps for running effective employee surveys.
There are many types of employee surveys. One way to classify them is by the focus of the themes they measure, such as:
Often, surveys that cover a range of themes are categorised as employee engagement surveys. Here employee engagement is used as an umbrella term encompassing multiple aspects of working life as well as engagement itself. A well-known example is Gallup’s Q12 employee engagement survey, which covers 12 themes ranging from resources to company mission.
Surveys can also be categorised according to other features like:
Below are some of the most common types of survey based on these features.
It’s common for organisations to use multiple types of survey. A typical survey programme might start with a core survey, followed by a deep dive survey and a trend survey. In addition, employee experience / life-cycle surveys might run alongside these surveys.
Other organisations find just one survey a year, usually a core survey, works well for them.
Asking for feedback too often can lead to survey fatigue, which results in low response rates and poor-quality feedback. It's important to remember that the underlying driver of survey fatigue is a lack of action on the results rather than the surveys themselves.
Surveys with no action quickly lose credibility as employees become frustrated that their views are not being heard.
It’s vital when planning a survey programme to ensure that adequate resources will be available to respond to every survey with meaningful actions, and to communicate what actions were taken before the next survey is run.
In our experience the minimum time for this cycle is three months, meaning that for most organisations with effective survey programmes three or four surveys per year is typical.
Of course there’s a difference for employee experience / employee life-cycle surveys. These surveys are open continuously, with invites sent out daily or weekly to new starters, leavers and so on.
Although the same employee will not repeat a new starter or leaver survey, it is still important to respond to the feedback and communicate the actions taken, so that employees continue to feel that responding to the organisation’s surveys is worthwhile.
The survey questionnaire is a critical foundation for an effective employee survey. Here's our step-by-step guide to creating one for your survey.
There are many reasons for conducting employee surveys. They give employees a sense that their views are important and that by sharing them they can have an impact on organisational change. For the organisation, they can help highlight good practices, monitor areas needing improvement and provide evidence for decisions.
Taking a step back, the bottom line for any survey should be to help the organisation reach its goals. For most organisations, improved organisational performance is a foundational goal but other intermediary goals are often the focus, such as those set by a strategic plan.
Focusing survey objectives on what might help the organisation achieve its goals creates a more impactful survey. For example, there might be a concern that failure to attract and retain talent is holding the organisation back, in which case improving employee experience would be a great survey objective.
Increasing employee engagement might be the best survey objective where low productivity has been identified as a concern.
Although these points are simple and obvious, it is important to state explicitly the objective of your survey so everyone is clear about its aims.
With the employee survey objective decided the next step is to identify the employee survey themes. Employee survey themes are the topics or latent factors measured by the survey questions.
The most effective surveys incorporate themes that have been identified by careful consideration of their relevance to the survey’s objective.
Researching what would have the greatest impact on the survey objective, in the context of your organisation, provides the best guide to the themes that should be included in the survey. Ideally this research is carried out by consulting employees and other stakeholders: asking for their views on the organisation and its working practices will suggest what could make a difference.
At the same time, this research can be used to confirm that the right survey objective has been chosen in the first place. For example, is employee engagement the right thing to focus on to improve productivity? The factor with the biggest impact on the organisational issue might be different from the one initially identified.
Consulting employees may not always be feasible, in which case a good starting point is reviewing published research on the survey objective.
There is a large amount of research on the most common survey objectives, such as improving employee engagement and well-being, and work done by others in the field can provide inspiration for your survey themes.
Another option is to rely on the expertise of a survey provider who will be able to advise on what works well in their experience.
Dividing survey themes into those that measure outcomes and those that measure the drivers of outcomes is helpful.
Usually there is just one outcome survey-theme, which aligns with the survey objective, such as improving employee engagement or well-being. The outcome survey-theme provides a measure for tracking progress.
Driver survey-themes are those that correlate with and potentially impact the outcome survey-theme. The relationships between the outcome and driver survey-themes can be used to understand which driver survey-themes will have the greatest effect on the outcome survey-theme, guiding the actions that follow the survey.
With a clear understanding of the themes your survey will cover, the next step is creating the questions that will measure them.
Drawing on any research you may have carried out with employees, and on your organisational knowledge, you can start drafting your own questions. Following our guide below will help you create good survey questions.
Alternatively questions can come “off the shelf” from a survey provider or collected from other sources.
A good starting point for your questions is reviewing published research on your outcome survey-theme.
For example, on employee engagement the work of a group of psychologists from Utrecht University is highly regarded. They define engagement or work engagement as a state of mind characterised by high levels of vigour, dedication, and absorption.
Sample work engagement questions:
Other measures of engagement focus on commitment - employees' attachment to the organisation.
The style of these questions is typical of those measuring outcome survey-themes. They are intangible and, whilst they are good for tracking progress and setting targets, they do not provide practical information about how to make improvements.
It’s important that questions measuring your driver survey-themes provide information that is actionable. In other words, it should be clear from the question what action would be needed to improve whatever is being measured.
A short-cut to trawling through research is to use generative AI such as ChatGPT to collect survey questions.
Prompting ChatGPT to write questions that assess common themes such as employee engagement or organisational commitment produces reasonable questions in seconds that might otherwise have taken hours to find by reading through research reports.
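For example, a prompt along the lines of “Write ten positively worded survey statements measuring organisational commitment, suitable for a five-point strongly agree to strongly disagree scale” will usually return workable drafts to edit and validate; the exact wording is only illustrative.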
As with all uses of generative AI, time must be dedicated to evaluating its output and validating it against other sources.
When it comes to response scales, Likert scales are the most common. For employee surveys a scale ranging from strongly agree to strongly disagree is the most popular.
The key features of a Likert scale are:
Other scales can be created to suit specific questions. For example:
Four to seven options are best – longer scales use up too much of respondents’ cognitive energy deciding on their response.
Likert scales are popular because they allow respondents to indicate whether they feel positive or negative as well as the strength of their feeling.
Dichotomous (yes / no) scales can be used but Likert scales provide richer data for statistical analysis.
There are differing views on whether to include a neutral option such as “neither agree nor disagree” or to force respondents to choose a positive or negative response by not including a midpoint.
Research shows that results are not distorted either way. In other words, there’s no right or wrong answer, but importantly, once a decision has been made the same scale must be used across surveys to ensure valid comparisons.
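For illustration, here is a minimal sketch in Python of how a five-point agree-disagree scale with a neutral midpoint might be coded for analysis; the labels and numeric codes are common conventions rather than a fixed standard.

```python
# A minimal sketch of a five-point Likert agreement scale with a neutral
# midpoint, mapped to numeric codes for analysis. Labels and codes are
# illustrative conventions, not a fixed standard.
LIKERT_5 = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,  # omit this midpoint for a forced-choice scale
    "Agree": 4,
    "Strongly agree": 5,
}
```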
Questions sourced from published research, created by generative AI, or collected from elsewhere are good starting points, but they will usually need to be edited to fit your organisational context.
You may also need to re-phrase your questions to match the response scales you have chosen.
In doing so care is needed to ensure that your final questions provide useful, reliable, and valid data.
Here are some tips for drafting good survey questions.
The length of the questionnaire will depend on the type of survey. There's a balance to be struck between sufficiently covering the things you would like to measure and the length of the questionnaire. If it's too long, there's a risk of low response rates or poor-quality responses with little thought given by respondents to answering the questions. Too short and it may miss important feedback.
The length of a questionnaire can be measured by the number of questions or how long it takes to complete. Of course, the two are related but time to complete is more helpful.
A well-designed questionnaire with 30 simple and unambiguous questions might take less time to complete than a poorly designed questionnaire with 15 questions.
A completion time of five to ten minutes is a reasonable target for a core survey with 30 - 50 questions, which works out at roughly 10 - 15 seconds per question.
Planning for survey analysis can start before any survey data are collected.
Most surveys are run on a confidential basis, meaning that survey responses can be linked to an individual by whoever administers the survey, but individual responses are protected and not shared with anyone within the organisation.
Using this approach, you can prepare a data file before the survey launches containing the details of the employees who will be taking part.
As well as containing employees’ contact details for survey administration, the file should include the demographics that will be used to cut the survey data at the analysis stage. Useful demographics are:
The benefits of this approach are a shorter questionnaire (demographic questions do not need to be asked in the survey itself) and better integrity of demographic data.
It is important to decide on the minimum number of respondents required in a demographic group for that group’s responses to be reported. Usually this is between 5 and 10 respondents. Setting a minimum number helps to protect respondents from managers trying to work out who responded in a particular way. The minimum number should be clearly communicated in pre-survey communication.
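As an illustration of how this rule can be enforced at the analysis stage, the Python sketch below (using pandas) drops any demographic group with fewer respondents than the threshold before results are reported; the DataFrame layout and the "department" column are assumptions for the example.

```python
import pandas as pd

MIN_GROUP_SIZE = 5  # minimum respondents for a demographic group to be reported

def reportable(responses: pd.DataFrame, demographic: str) -> pd.DataFrame:
    """Keep only rows belonging to demographic groups that meet the minimum
    size, so results for small groups are never reported."""
    group_sizes = responses.groupby(demographic)[demographic].transform("size")
    return responses[group_sizes >= MIN_GROUP_SIZE]

# Example (hypothetical data):
# responses = pd.read_csv("survey_responses.csv")
# dept_results = reportable(responses, "department")
```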
There are many options for analysing survey data. Most often the fundamental unit of analysis is the percentage of respondents selecting each response option.
Usually response options are combined. For example, the percentages of respondents who answered “strongly agree” or “agree” are combined into a “percent positive” question-score.
Percent positive scores are popular because they are easy to interpret. And because of their dominance most survey benchmark data are “percent positive” scores.
It is helpful, particularly for a survey with a large number of questions, to combine question-scores to create theme-scores. Reporting by theme provides a good overview of the results.
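As a minimal sketch of these calculations in Python with pandas, assuming one row per respondent, one column per question, and a hypothetical question-to-theme mapping:

```python
import pandas as pd

# Hypothetical mapping of questions to the themes they measure.
THEMES = {"q1": "Engagement", "q2": "Engagement", "q3": "Wellbeing", "q4": "Wellbeing"}
POSITIVE = {"Agree", "Strongly agree"}  # responses counted as positive

def percent_positive(responses: pd.DataFrame) -> pd.Series:
    """Percent positive question-scores: the share of respondents answering
    'Agree' or 'Strongly agree' to each question."""
    return responses.isin(POSITIVE).mean() * 100

def theme_scores(question_scores: pd.Series) -> pd.Series:
    """Theme-scores: the average of the question-scores within each theme."""
    return question_scores.groupby(THEMES).mean()

# Example (hypothetical data):
# responses = pd.read_csv("survey_responses.csv")[list(THEMES)]
# q_scores = percent_positive(responses)
# t_scores = theme_scores(q_scores)
```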
Comparisons are the foundation of most survey reporting. Comparisons can be made against previous survey scores, across groups and against internal and external benchmarks.
Sorting scores by differences to the previous survey’s scores and / or external benchmarks is a good approach to identifying the key findings.
Comparisons across demographic groups are helpful for understanding the results in more depth.
Reports shared with departments or teams can include an internal benchmark comparison, i.e. the average percent positive question-scores for the whole organisation. Comparisons against internal benchmarks help managers understand the areas they need to focus on to bring their team up to the organisation’s level.
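A simple sketch of this kind of comparison, with hypothetical percent positive scores for a team and for the organisation as a whole (the internal benchmark):

```python
import pandas as pd

# Hypothetical percent positive question-scores.
team = pd.Series({"q1": 62.0, "q2": 80.0, "q3": 55.0, "q4": 71.0})
organisation = pd.Series({"q1": 70.0, "q2": 75.0, "q3": 68.0, "q4": 72.0})

# Difference to the internal benchmark, sorted so the largest gaps come first.
gaps = (team - organisation).sort_values()
print(gaps)  # q3 -13.0, q1 -8.0, q4 -1.0, q2 +5.0
```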
Sometimes changes at an aggregate level might occur as a result of a change in the composition of groups between surveys. For example, it is common for new starters to be more positive than others, so improvements might simply be the result of more new starters taking part in the current survey than in the previous one. Comparing how individuals respond across surveys shows real change at an individual level.
Key driver analysis can be used to examine the relationship between your driver survey-themes and your outcome survey-themes.
Key driver analysis identifies the relative influence of your driver survey-themes on your outcome survey-themes.
The logic behind this approach is that driver survey-themes with the largest effect on your outcome survey-themes are the ones that you should focus on improving because doing so is more likely to lead to an improvement in your outcome survey-theme to meet your survey objective.
There are various approaches to key driver analysis, ranging from basic to advanced. Looking for the highest correlations between driver survey-theme scores and outcome survey-theme scores is the simplest. While this approach is common, it is unreliable, not least because driver survey-themes tend to be highly correlated with one another, and it is not recommended.
Dominance analysis and relative weights analysis are more advanced approaches and although they have their critics, they are currently the best options for identifying key drivers.
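As an illustrative sketch of the general idea (not a substitute for a dedicated statistical package), the Python code below computes general dominance weights from respondent-level theme scores: each driver survey-theme's weight is the gain in R-squared it adds to regressions on every subset of the other drivers, averaged within each subset size and then across sizes. The column names are hypothetical.

```python
from itertools import combinations

import numpy as np
import pandas as pd

def r_squared(X: pd.DataFrame, y: pd.Series) -> float:
    """R-squared of an ordinary least squares fit of y on X (with intercept)."""
    A = np.column_stack([np.ones(len(X)), X.to_numpy()])
    coef, *_ = np.linalg.lstsq(A, y.to_numpy(), rcond=None)
    residuals = y.to_numpy() - A @ coef
    return 1.0 - residuals.var() / y.to_numpy().var()

def general_dominance(drivers: pd.DataFrame, outcome: pd.Series) -> pd.Series:
    """General dominance weights: for each driver, the incremental R-squared it
    adds over every subset of the other drivers, averaged within each subset
    size and then across sizes. Larger weights suggest more influential drivers."""
    names = list(drivers.columns)
    weights = {}
    for d in names:
        others = [n for n in names if n != d]
        per_size = []
        for k in range(len(others) + 1):
            gains = []
            for subset in combinations(others, k):
                base = r_squared(drivers[list(subset)], outcome) if subset else 0.0
                full = r_squared(drivers[list(subset) + [d]], outcome)
                gains.append(full - base)
            per_size.append(np.mean(gains))
        weights[d] = np.mean(per_size)
    return pd.Series(weights).sort_values(ascending=False)

# Example (hypothetical respondent-level theme scores):
# drivers = scores[["Manager support", "Recognition", "Resources"]]
# outcome = scores["Engagement"]
# print(general_dominance(drivers, outcome))
```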
There are various activities involved in the feedback and action planning stage of an employee survey.
Depending on the type of survey some activities may not be needed, but whatever the type, it is crucial to communicate the organisation’s reaction to the results and how it will address them.
It’s common for the feedback to be sequenced. In theory, results could be released to all employees and managers immediately after the survey closes but in practice most organisations prefer to cascade the results through the organisation, particularly for assessment / baseline surveys.
Timing is important. Minimising the time between the survey closing and sharing the results with everyone requires careful planning.
It helps to co-ordinate the survey schedule with the regular planned meetings of the senior management team, so that the results can be presented at a senior management team meeting shortly after the survey closes.
Good communication helps too, such as giving employees a clear timeline of post-survey activities.
Once the senior management team has digested the results and agreed the key messages, these can be communicated to the organisation.
Next, it’s the turn of managers to get involved. All managers should share and discuss the results with their teams and together agree and plan appropriate actions. It’s one of the most important steps and managers should be given support and advice on how to run feedback sessions effectively where appropriate.
Keeping up survey communication after the results have been shared ensures your survey continues to have an impact.
Communicating planned changes throughout the year and linking them back to the feedback from the surveys reminds employees that their voice has been heard.
Another useful communication approach is sharing individual accounts of changes and benefits that have been realised since the survey.
Regularly including the employee survey on the agenda of team meetings helps to maintain momentum and create accountability for follow-up actions.