Article Text


Internet-based surveys: relevance, methodological considerations and troubleshooting strategies
Vikas Menon1 and Aparna Muraleedharan2

1 Department of Psychiatry, Jawaharlal Institute of Postgraduate Medical Education and Research (JIPMER), Puducherry, India
2 Department of Anatomy, Pondicherry Institute of Medical Sciences, Puducherry, India

Correspondence to Dr Vikas Menon; drvmenon{at}gmail.com


Sir

Internet-based surveys have steadily gained popularity with researchers because of their myriad advantages, such as the ability to reach a larger pool of potential participants within a shorter period of time (vis-à-vis face-to-face surveys), to study subjects who may be geographically dispersed or otherwise difficult to access, and the efficiency of data management and collation.1 2 This is in addition to obvious attractions such as convenience, relative inexpensiveness and user-friendly features such as a comfortable pace and an enhanced sense of participant control.

With the advent of the COVID-19 pandemic and dwindling opportunities for face-to-face data collection, internet-based tools offer a powerful alternative for rapid data collection. Moreover, they can be useful from a public health perspective to track public perceptions, myths and misconceptions3 in times of disaster. Many methodological issues confront a prospective researcher designing online questionnaires/surveys. A few of these, with corresponding troubleshooting suggestions, are outlined below:

  1. Web or mailed questionnaire?—A meta-analysis of 39 studies4 concluded that response rates to mail surveys are, in general, higher than those to web surveys. Interestingly, two important factors underlying this variation were the type of respondent and the medium of follow-up reminders. While college students were more responsive to web surveys, physicians and laypersons were found to be more receptive to mail surveys. Further, follow-up reminders were more effective when sent by mail, probably owing to greater personalisation, than by the web. Recently, a hybrid method called the web-push survey, wherein initial and subsequent follow-up contacts are made by mail to request a response by web, was found to be a parsimonious method of eliciting responses.5 However, results on combining web and mail surveys have not been consistent. An examination of different methods of using web portals to survey consumers about their experience with medical care revealed that response rates were higher for mail questionnaires than for web-based protocols.6 A study on physicians7 found that an initial mailing of the questionnaire, followed by web surveying of non-responders, increased response rates and enhanced the representativeness of the sample. For surveys with a shorter time frame, the reverse method, namely an initial web survey followed by mailing the questionnaire to non-responders, was recommended, though key outcome variables did not differ between these data collection methods. The message appears to be that hybridisation involving both web and mail surveys may be a resource-effective method to augment response rates, though the optimal way of combining them may differ based on respondent and survey characteristics.

  2. How to enhance response rates?—Response rates to email surveys are highly variable and traditionally in the range of 25%–30%, especially without follow-up reminders or reinforcements.8 Reasons for this include survey fatigue, competing demands and privacy concerns. These rates are much lower than those of telephonic surveys, which in turn are lower than those of face-to-face surveys.9 10 However, with the use of multimode approaches, the response rate can be improved to as much as 60%–70%.11 Substantial evidence exists on methods to enhance response rates in internet surveys. More than a decade ago, a Cochrane review suggested prenotifying respondents, shortening questionnaires, incentivising responses and making multiple follow-up contacts to enhance response rates.12 Subsequently, personalised invitations,13 personalisation of reminders,14 dynamic strategies such as changes in the wording of reminders15 and the inclusion of Quick Response (QR) codes16 have all been found to augment response rates.
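As a rough planning aid, the response-rate figures above translate directly into the number of invitations needed for a target sample size. The sketch below (plain Python; the rates are the illustrative figures from the text, not guarantees) shows the arithmetic:

```python
import math

def invitations_needed(target_responses: int, response_rate: float) -> int:
    """Invitations to send to expect a given number of completed responses,
    assuming a fixed average response rate (rounded up to whole invitations)."""
    if not 0 < response_rate <= 1:
        raise ValueError("response_rate must be in (0, 1]")
    return math.ceil(target_responses / response_rate)

# Illustrative rates from the text: ~25% for a single email wave,
# up to ~65% with multimode follow-up.
single_wave = invitations_needed(300, 0.25)  # 1200 invitations
multimode = invitations_needed(300, 0.65)    # 462 invitations
```

The contrast (1200 vs 462 invitations for the same 300 responses) illustrates why multimode follow-up is worth the added logistical effort.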

  3. How long is too long?—In academic surveys, the association between questionnaire length and response rates, particularly for web surveys, is weak and inconsistent.17 18 Considering the attention span of an average adult, 20 min has been recommended as the maximum length of web surveys by market research experts.19 However, it is difficult to be dogmatic about this as many non-survey factors, such as familiarity with the device and platform, may affect interest and attention span.20 On current evidence, restricting survey length to below 13 min may offer the best balance between response burden and quality.21
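One practical way to apply such time limits at the design stage is to estimate completion time from the item count before piloting. In the sketch below the 15 s-per-item average is an assumption to be calibrated against pilot data, not a figure from the cited studies:

```python
def estimated_minutes(n_items: int, seconds_per_item: float = 15.0) -> float:
    """Rough completion-time estimate for a questionnaire.

    seconds_per_item is an assumed average for simple closed-ended items
    and should be replaced with timings from a pilot run."""
    return n_items * seconds_per_item / 60.0

# With the assumed pace, ~52 items already reaches the 13 min point
# suggested by current evidence as the best burden/quality balance.
minutes = estimated_minutes(52)  # 13.0 min
```

Open-ended or matrix items take far longer than the assumed average, so any such estimate is a floor, not a ceiling.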

  4. What are the optimal frequency and number of reminders?—Sending reminders has been noted to positively influence survey response rates.21 Response rates appear to improve up to a maximum of three to four reminders, beyond which concerns about spamming or pressurising respondents increase.14 Less is known about the optimal frequency of reminders. Blumenberg and others (2019) noted that sending reminders once every 15 days was associated with higher response rates, especially to the first two questionnaires in a series. Researchers should carefully weigh the effort involved in sending multiple reminders against the risks of survey fatigue and lower engagement in later surveys, especially when multiple rounds of surveys are indicated.
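The fortnightly cadence with a three-to-four reminder cap described above can be sketched as a small scheduling helper; the launch date and defaults below are illustrative:

```python
from datetime import date, timedelta

def reminder_schedule(launch: date,
                      interval_days: int = 15,
                      max_reminders: int = 3) -> list[date]:
    """Dates for follow-up reminders after the initial invitation,
    capped at max_reminders to avoid spamming respondents."""
    return [launch + timedelta(days=interval_days * i)
            for i in range(1, max_reminders + 1)]

# Survey launched 1 June 2020, reminders every 15 days, capped at 3.
sched = reminder_schedule(date(2020, 6, 1))
# reminders fall on 2020-06-16, 2020-07-01 and 2020-07-16
```

For multi-round panel surveys, the same cap applies per round; widening the interval in later rounds is one way to mitigate the fatigue effect noted above.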

  5. How to minimise the non-representativeness of the sample?—In any internet survey, the respondents are not selected through probability sampling, and this may affect the generalisability of findings. A larger sample will not solve this issue because information about non-responders is unavailable. One solution, though more resource-intensive, would be to randomly select the required number of respondents from a defined list (such as a professional membership directory in a survey of professionals), seek their consent through telephone calls, collect basic sociodemographic information from those who decline to participate and send the questionnaire only to those who consent, with a request not to forward it to anyone else.22 This prevents snowballing, which can also affect sample representativeness. Further, if the sociodemographic profile of those who declined matches that of those who consented, then we can conclude that responders are representative of the intended population. Of course, this strategy works only when a list of potential respondents is available. To avoid missing those with no internet access or low-frequency users, investigators may contact people randomly selected from a list through another mode (telephone, face to face or mail) and ask them to complete the survey. Allowing respondents to complete the survey in a variety of modes (web/mail/telephone) is another strategy to avoid missing those with limited access to the web.
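The representativeness check described above, comparing the sociodemographic profile of decliners with that of consenters, can be formalised for a binary characteristic (say, proportion of women) as a two-proportion z-test. The sketch below uses only the standard library and hypothetical counts:

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """z statistic comparing two proportions with a pooled standard error,
    e.g. proportion of women among consenters (x1/n1) vs decliners (x2/n2)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 120 of 250 consenters vs 40 of 90 decliners are women.
# |z| < 1.96 would suggest no difference at the 5% level, i.e. consenters
# look demographically similar to decliners on this characteristic.
z = two_proportion_z(120, 250, 40, 90)
```

In practice this comparison would be repeated across the collected sociodemographic variables, with some correction for multiple testing if many characteristics are compared.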

In conclusion, hybrid survey designs combining web and mail contacts probably lead to quicker and higher response rates. Fortnightly, or even weekly, reminders (to a maximum of three to four) would serve to enhance response rates. Questionnaire length shows no consistent association with response rates, but limiting the survey to under 20 min may be a pragmatic consideration. Other suggestions to improve response rates, which must be treated as preliminary owing to limited evidence, include sending personalised email reminders, modifying reminder contents over the survey life cycle and creating QR codes that seamlessly direct respondents to the survey.

References

Dr Vikas Menon obtained a bachelor's degree from the Jawaharlal Institute of Postgraduate Medical Education and Research (JIPMER), Puducherry, in 2004 and completed postgraduate training in Psychiatry at JIPMER in 2008. He has been on the faculty of JIPMER since 2012 and is currently an Additional Professor of Psychiatry. His main research interests include digital psychiatry, suicidology and mood disorders.



Footnotes

  • Contributors VM conceptualised the work, did the review of the literature and wrote the first draft of the manuscript. AM co-conceptualised the work, contributed to the review of the literature and revised the first draft of the manuscript for intellectual content. Both authors read and approved the final version of the manuscript.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; externally peer reviewed.